* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-13 6:56 0% ` Xu, Rosen
@ 2021-09-13 7:22 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-09-13 7:22 UTC (permalink / raw)
To: Xu, Rosen, Anoob Joseph, Yigit, Ferruh, Andrew Rybchenko
Cc: Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan, Ananyev,
Konstantin, thomas, Ankur Dwivedi, Akhil Goyal, dev
On 9/13/21 9:56 AM, Xu, Rosen wrote:
> Hi,
>
>> -----Original Message-----
>> From: Anoob Joseph <anoobj@marvell.com>
>> Sent: Wednesday, September 08, 2021 18:30
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Xu, Rosen <rosen.xu@intel.com>;
>> Andrew Rybchenko <arybchenko@solarflare.com>
>> Cc: Nicolau, Radu <radu.nicolau@intel.com>; Doherty, Declan
>> <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
>> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
>> thomas@monjalon.net; Ankur Dwivedi <adwivedi@marvell.com>;
>> andrew.rybchenko@oktetlabs.ru; Akhil Goyal <gakhil@marvell.com>;
>> dev@dpdk.org
>> Subject: RE: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>>
>> Hi Ferruh, Rosen, Andrew,
>>
>> Please see inline.
>>
>> Thanks,
>> Anoob
>>
>>> Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>>>
>>> External Email
>>>
>>> ----------------------------------------------------------------------
>>> On 8/23/2021 11:02 AM, Akhil Goyal wrote:
>>>> Reassembly is a costly operation if it is done in software; however,
>>>> if it is offloaded to HW, it can considerably save application cycles.
>>>> The operation becomes even more costly if the IP fragments are
>>>> encrypted.
>>>>
>>>> To resolve the above two issues, a new offload
>>> DEV_RX_OFFLOAD_REASSEMBLY
>>>> is introduced in ethdev for devices which can attempt reassembly of
>>>> packets in hardware.
>>>> rte_eth_dev_info is added with the reassembly capabilities which a
>>>> device can support.
>>>> Now, if IP fragments are encrypted, reassembly can also be attempted
>>>> while doing inline IPsec processing.
>>>> This is controlled by a flag in rte_security_ipsec_sa_options to
>>>> enable reassembly of encrypted IP fragments in the inline path.
>>>>
>>>> The resulting reassembled packet would be a typical segmented mbuf
>>>> in case of success.
>>>>
>>>> And if reassembly of the fragments fails or is incomplete (i.e. some
>>>> fragments do not arrive before the reass_timeout), the mbuf is updated
>>>> with the ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and the mbuf is returned
>>>> as is. The application may then decide the fate of the packet: wait
>>>> longer for more fragments to arrive, or drop it.
>>>>
>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>>> ---
>>>> lib/ethdev/rte_ethdev.c | 1 +
>>>> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
>>>> lib/mbuf/rte_mbuf_core.h | 3 ++-
>>>> lib/security/rte_security.h | 10 ++++++++++
>>>> 4 files changed, 30 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
>>>> 9d95cd11e1..1ab3a093cf 100644
>>>> --- a/lib/ethdev/rte_ethdev.c
>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>> @@ -119,6 +119,7 @@ static const struct {
>>>> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
>>>> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
>>>> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
>>>> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
>>>> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
>>>> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
>>>> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
>>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
>>>> d2b27c351f..e89a4dc1eb 100644
>>>> --- a/lib/ethdev/rte_ethdev.h
>>>> +++ b/lib/ethdev/rte_ethdev.h
>>>> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
>>>> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
>>>> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
>>>> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
>>>> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
>>>
>>> previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; it has been long
>>> since that offload was removed, but I am not sure if it causes any
>>> problem to re-use it.
>>>
>>>> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
>>>> /**
>>>> * Timestamp is set by the driver in
>>> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
>>>> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
>>>> */
>>>> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
>>> (UINT16_MAX)
>>>>
>>>> +/**
>>>> + * Reassembly capabilities that a device can support.
>>>> + * The device which can support reassembly offload should set
>>>> + * DEV_RX_OFFLOAD_REASSEMBLY
>>>> + */
>>>> +struct rte_eth_reass_capa {
>>>> + /** Maximum time in ns that a fragment can wait for further
>>> fragments */
>>>> + uint64_t reass_timeout;
>>>> + /** Maximum number of fragments that device can reassemble */
>>>> + uint16_t max_frags;
>>>> + /** Reserved for future capabilities */
>>>> + uint16_t reserved[3];
>>>> +};
>>>> +
>>>
>>> I wonder if there is any other hardware around that supports reassembly
>>> offload; it would be good to get more feedback on the capabilities list.
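To illustrate how the proposed capability would be consumed, here is a
minimal sketch of an application querying and enabling the offload. It uses
the names from this RFC (struct rte_eth_reass_capa, reass_capa,
DEV_RX_OFFLOAD_REASSEMBLY), which may still change; untested code:

    #include <errno.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    enable_reassembly_offload(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* Proceed only if the PMD reports the offload at all. */
        if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_REASSEMBLY) == 0)
            return -ENOTSUP;

        /* Limits advertised via the proposed capability struct. */
        printf("reass_timeout: %" PRIu64 " ns, max_frags: %" PRIu16 "\n",
               dev_info.reass_capa.reass_timeout,
               dev_info.reass_capa.max_frags);

        conf->rxmode.offloads |= DEV_RX_OFFLOAD_REASSEMBLY;
        return 0;
    }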
>>>
>>>> /**
>>>> * Ethernet device associated switch information
>>>> */
>>>> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
>>>> * embedded managed interconnect/switch.
>>>> */
>>>> struct rte_eth_switch_info switch_info;
>>>> + /* Reassembly capabilities of a device for reassembly offload */
>>>> + struct rte_eth_reass_capa reass_capa;
>>>>
>>>> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
>>>
>>> Reserved fields were added to be able to update the struct without
>>> breaking the ABI, so that a critical change doesn't have to wait until
>>> the next ABI break release.
>>> Since this is an ABI break release, we can keep the reserved field and
>>> add the new struct. Or this can be an opportunity to get rid of the
>>> reserved field.
>>>
>>> Personally I have no objection to getting rid of the reserved field,
>>> but it is better to agree on this explicitly.
>>>
>>>> void *reserved_ptrs[2]; /**< Reserved for future fields */
>>>> };
>>>>
>>>> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
>>>> index
>>>> bb38d7f581..cea25c87f7 100644
>>>> --- a/lib/mbuf/rte_mbuf_core.h
>>>> +++ b/lib/mbuf/rte_mbuf_core.h
>>>> @@ -200,10 +200,11 @@ extern "C" {
>>>> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
>>>> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
>>>> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
>>> << 22))
>>>> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>>>>
>>>
>>> Similar comment to Andrew's: what is the expectation from the
>>> application if this flag exists? Can we drop it to simplify the logic
>>> in the application?
>>
>> [Anoob] There can be a few cases where the hardware/NIC attempts inline
>> reassembly but fails to complete it:
>>
>> 1. Number of fragments is larger than what is supported by the hardware
>> 2. Hardware reassembly resources are exhausted (due to limited
>>    reassembly contexts etc.)
>> 3. Reassembly errors such as overlapping fragments
>> 4. Wait time exhausted (or reassembly timeout)
>>
>> In such cases, the application would be required to retrieve the
>> original fragments so that it can attempt reassembly in software. The
>> incomplete flag is useful for two purposes, basically:
>>
>> 1. The application would need to retrieve the time the fragment has
>> already spent in hardware reassembly so that the software reassembly
>> attempt can compensate for it. Otherwise, the reassembly timeout across
>> hardware + software will not be accurate.
Could you clarify how the application will find out the time spent
in HW?
>> 2. Retrieve the original fragments. With this proposal, an incomplete
>> reassembly would result in a chained mbuf, but the segments need not be
>> consecutive. To explain a bit more:
>>
>> Suppose we have a packet that is fragmented into 3 fragments, and
>> fragment 3 & fragment 1 arrive in that order. Fragment 2 didn't arrive
>> and the hardware ultimately pushes it. In that case, the application
>> would be receiving a chained/segmented mbuf with fragment 1 & fragment 3
>> chained.
>>
>> Now, this chained mbuf can't be treated like a regular chained mbuf.
>> Each fragment would have its own IP header, and there are fragments
>> missing in between. The only thing the application is expected to do is
>> retrieve the fragments and push them to s/w reassembly.
It sounds like it conflicts with the SCATTER and BUFFER_SPLIT
offloads, which allow returning chained mbufs. I don't know
if it is good or bad, but in any case it must be documented.
>
> What you mentioned is error identification. But actually a negotiation of the maximum frame size is needed before datagram Tx/Rx.
It sounds like it is OK for informational purposes, but
right now I don't understand how it could be used by the
application. The application still has to support reassembly
in SW regardless of the information.
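A minimal sketch of the Rx-side handling described above: the flag name
follows this RFC, and reassemble_in_sw() and process_packet() are
hypothetical application helpers, not DPDK APIs:

    #include <rte_mbuf.h>

    static void
    handle_rx_packet(struct rte_mbuf *m)
    {
        if (m->ol_flags & PKT_RX_REASSEMBLY_INCOMPLETE) {
            /*
             * Each segment is a separate IP fragment with its own
             * header, and fragments may be missing in between; unchain
             * the segments and hand them to SW reassembly one by one
             * (pkt_len fix-up omitted in this sketch).
             */
            struct rte_mbuf *seg = m, *next;

            while (seg != NULL) {
                next = seg->next;
                seg->next = NULL;
                seg->nb_segs = 1;
                reassemble_in_sw(seg);   /* hypothetical helper */
                seg = next;
            }
        } else {
            /* Fully reassembled: a normal segmented mbuf. */
            process_packet(m);   /* hypothetical helper */
        }
    }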
>>>
>>>> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>>>>
>>>> -#define PKT_FIRST_FREE (1ULL << 23)
>>>> +#define PKT_FIRST_FREE (1ULL << 24)
>>>> #define PKT_LAST_FREE (1ULL << 40)
>>>>
>>>> /* add new TX flags here, don't forget to update PKT_LAST_FREE */
>>>> diff --git a/lib/security/rte_security.h
>>>> b/lib/security/rte_security.h index 88d31de0a6..364eeb5cd4 100644
>>>> --- a/lib/security/rte_security.h
>>>> +++ b/lib/security/rte_security.h
>>>> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
>>>> * * 0: Disable per session security statistics collection for this SA.
>>>> */
>>>> uint32_t stats : 1;
>>>> +
>>>> + /** Enable reassembly on incoming packets.
>>>> + *
>>>> + * * 1: Enable driver to try reassembly of encrypted IP packets for
>>>> + * this SA, if supported by the driver. This feature will work
>>>> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
>>>> + * inline ethernet device.
>>>> + * * 0: Disable reassembly of packets (default).
>>>> + */
>>>> + uint32_t reass_en : 1;
>>>> };
>>>>
>>>> /** IPSec security association direction */
>>>>
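For reference, enabling the proposed per-SA option would look roughly like
the sketch below; the reass_en field exists only in this RFC and its name
may change, and the session setup around it is illustrative only:

    #include <rte_security.h>

    static void
    sa_conf_enable_reassembly(struct rte_security_session_conf *conf)
    {
        conf->action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
        conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
        /*
         * Ask the driver to attempt reassembly of encrypted IP
         * fragments; effective only when DEV_RX_OFFLOAD_REASSEMBLY is
         * also enabled on the inline Ethernet device.
         */
        conf->ipsec.options.reass_en = 1;
    }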
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-08 10:29 0% ` [dpdk-dev] [EXT] " Anoob Joseph
@ 2021-09-13 6:56 0% ` Xu, Rosen
2021-09-13 7:22 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Xu, Rosen @ 2021-09-13 6:56 UTC (permalink / raw)
To: Anoob Joseph, Yigit, Ferruh, Andrew Rybchenko
Cc: Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan, Ananyev,
Konstantin, thomas, Ankur Dwivedi, andrew.rybchenko, Akhil Goyal,
dev, Xu, Rosen
Hi,
> -----Original Message-----
> From: Anoob Joseph <anoobj@marvell.com>
> Sent: Wednesday, September 08, 2021 18:30
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Xu, Rosen <rosen.xu@intel.com>;
> Andrew Rybchenko <arybchenko@solarflare.com>
> Cc: Nicolau, Radu <radu.nicolau@intel.com>; Doherty, Declan
> <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net; Ankur Dwivedi <adwivedi@marvell.com>;
> andrew.rybchenko@oktetlabs.ru; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org
> Subject: RE: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>
> Hi Ferruh, Rosen, Andrew,
>
> Please see inline.
>
> Thanks,
> Anoob
>
> > Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> > > Reassembly is a costly operation if it is done in software; however,
> > > if it is offloaded to HW, it can considerably save application cycles.
> > > The operation becomes even more costly if the IP fragments are
> > > encrypted.
> > >
> > > To resolve the above two issues, a new offload
> > DEV_RX_OFFLOAD_REASSEMBLY
> > > is introduced in ethdev for devices which can attempt reassembly of
> > > packets in hardware.
> > > rte_eth_dev_info is added with the reassembly capabilities which a
> > > device can support.
> > > Now, if IP fragments are encrypted, reassembly can also be attempted
> > > while doing inline IPsec processing.
> > > This is controlled by a flag in rte_security_ipsec_sa_options to
> > > enable reassembly of encrypted IP fragments in the inline path.
> > >
> > > The resulting reassembled packet would be a typical segmented mbuf
> > > in case of success.
> > >
> > > And if reassembly of the fragments fails or is incomplete (i.e. some
> > > fragments do not arrive before the reass_timeout), the mbuf is updated
> > > with the ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and the mbuf is returned
> > > as is. The application may then decide the fate of the packet: wait
> > > longer for more fragments to arrive, or drop it.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> > > lib/ethdev/rte_ethdev.c | 1 +
> > > lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> > > lib/mbuf/rte_mbuf_core.h | 3 ++-
> > > lib/security/rte_security.h | 10 ++++++++++
> > > 4 files changed, 30 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> > > 9d95cd11e1..1ab3a093cf 100644
> > > --- a/lib/ethdev/rte_ethdev.c
> > > +++ b/lib/ethdev/rte_ethdev.c
> > > @@ -119,6 +119,7 @@ static const struct {
> > > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > > RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > > + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> > > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > d2b27c351f..e89a4dc1eb 100644
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> > > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > > #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
> >
> > previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; it has been long
> > since that offload was removed, but I am not sure if it causes any
> > problem to re-use it.
> >
> > > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > > /**
> > > * Timestamp is set by the driver in
> > RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > > @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> > > */
> > > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> > (UINT16_MAX)
> > >
> > > +/**
> > > + * Reassembly capabilities that a device can support.
> > > + * The device which can support reassembly offload should set
> > > + * DEV_RX_OFFLOAD_REASSEMBLY
> > > + */
> > > +struct rte_eth_reass_capa {
> > > + /** Maximum time in ns that a fragment can wait for further
> > fragments */
> > > + uint64_t reass_timeout;
> > > + /** Maximum number of fragments that device can reassemble */
> > > + uint16_t max_frags;
> > > + /** Reserved for future capabilities */
> > > + uint16_t reserved[3];
> > > +};
> > > +
> >
> > I wonder if there is any other hardware around that supports reassembly
> > offload; it would be good to get more feedback on the capabilities list.
> >
> > > /**
> > > * Ethernet device associated switch information
> > > */
> > > @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> > > * embedded managed interconnect/switch.
> > > */
> > > struct rte_eth_switch_info switch_info;
> > > + /* Reassembly capabilities of a device for reassembly offload */
> > > + struct rte_eth_reass_capa reass_capa;
> > >
> > > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> >
> > Reserved fields were added to be able to update the struct without
> > breaking the ABI, so that a critical change doesn't have to wait until
> > the next ABI break release.
> > Since this is an ABI break release, we can keep the reserved field and
> > add the new struct. Or this can be an opportunity to get rid of the
> > reserved field.
> >
> > Personally I have no objection to getting rid of the reserved field,
> > but it is better to agree on this explicitly.
> >
> > > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > > };
> > >
> > > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> > > index
> > > bb38d7f581..cea25c87f7 100644
> > > --- a/lib/mbuf/rte_mbuf_core.h
> > > +++ b/lib/mbuf/rte_mbuf_core.h
> > > @@ -200,10 +200,11 @@ extern "C" {
> > > #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> > > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
> > << 22))
> > > +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
> > >
> >
> > Similar comment to Andrew's: what is the expectation from the
> > application if this flag exists? Can we drop it to simplify the logic
> > in the application?
>
> [Anoob] There can be a few cases where the hardware/NIC attempts inline
> reassembly but fails to complete it:
>
> 1. Number of fragments is larger than what is supported by the hardware
> 2. Hardware reassembly resources are exhausted (due to limited
>    reassembly contexts etc.)
> 3. Reassembly errors such as overlapping fragments
> 4. Wait time exhausted (or reassembly timeout)
>
> In such cases, the application would be required to retrieve the
> original fragments so that it can attempt reassembly in software. The
> incomplete flag is useful for two purposes, basically:
>
> 1. The application would need to retrieve the time the fragment has
> already spent in hardware reassembly so that the software reassembly
> attempt can compensate for it. Otherwise, the reassembly timeout across
> hardware + software will not be accurate.
>
> 2. Retrieve the original fragments. With this proposal, an incomplete
> reassembly would result in a chained mbuf, but the segments need not be
> consecutive. To explain a bit more:
>
> Suppose we have a packet that is fragmented into 3 fragments, and
> fragment 3 & fragment 1 arrive in that order. Fragment 2 didn't arrive
> and the hardware ultimately pushes it. In that case, the application
> would be receiving a chained/segmented mbuf with fragment 1 & fragment 3
> chained.
>
> Now, this chained mbuf can't be treated like a regular chained mbuf.
> Each fragment would have its own IP header, and there are fragments
> missing in between. The only thing the application is expected to do is
> retrieve the fragments and push them to s/w reassembly.
What you mentioned is error identification. But actually a negotiation of the maximum frame size is needed before datagram Tx/Rx.
> >
> > > /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> > >
> > > -#define PKT_FIRST_FREE (1ULL << 23)
> > > +#define PKT_FIRST_FREE (1ULL << 24)
> > > #define PKT_LAST_FREE (1ULL << 40)
> > >
> > > /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > > diff --git a/lib/security/rte_security.h
> > > b/lib/security/rte_security.h index 88d31de0a6..364eeb5cd4 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> > > * * 0: Disable per session security statistics collection for this SA.
> > > */
> > > uint32_t stats : 1;
> > > +
> > > + /** Enable reassembly on incoming packets.
> > > + *
> > > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > > + * this SA, if supported by the driver. This feature will work
> > > + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> > > + * inline ethernet device.
> > > + * * 0: Disable reassembly of packets (default).
> > > + */
> > > + uint32_t reass_en : 1;
> > > };
> > >
> > > /** IPSec security association direction */
> > >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/6] bbdev: add capability for CRC16 check
2021-09-08 1:15 4% ` [dpdk-dev] [PATCH v3 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
@ 2021-09-12 12:35 0% ` Tom Rix
0 siblings, 0 replies; 200+ results
From: Tom Rix @ 2021-09-12 12:35 UTC (permalink / raw)
To: Nicolas Chautru, dev, gakhil
Cc: thomas, hemant.agrawal, mingshan.zhang, arun.joshi
On 9/7/21 6:15 PM, Nicolas Chautru wrote:
> Adding a missing operation flag for the case when CRC16
> is being used for the TB CRC check.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
> app/test-bbdev/test_bbdev_vector.c | 2 ++
> doc/guides/prog_guide/bbdev.rst | 3 +++
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/bbdev/rte_bbdev_op.h | 34 ++++++++++++++++++----------------
> 4 files changed, 24 insertions(+), 16 deletions(-)
>
> diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
> index 614dbd1..8d796b1 100644
> --- a/app/test-bbdev/test_bbdev_vector.c
> +++ b/app/test-bbdev/test_bbdev_vector.c
> @@ -167,6 +167,8 @@
> *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
> else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
> *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
> + else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
> + *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
> else if (!strcmp(token, "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
> *op_flag_value = RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
> else if (!strcmp(token, "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
> diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
> index 9619280..8bd7cba 100644
> --- a/doc/guides/prog_guide/bbdev.rst
> +++ b/doc/guides/prog_guide/bbdev.rst
> @@ -891,6 +891,9 @@ given below.
> |RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
> | Set to drop the last CRC bits decoding output |
> +--------------------------------------------------------------------+
> +|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
> +| Set for code block CRC-16 checking |
> ++--------------------------------------------------------------------+
> |RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
> | Set for bit-level de-interleaver bypass on input stream |
> +--------------------------------------------------------------------+
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index d707a55..69dd518 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -84,6 +84,7 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* bbdev: Added capability related to more comprehensive CRC options.
>
> ABI Changes
> -----------
> diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
> index f946842..7c44ddd 100644
> --- a/lib/bbdev/rte_bbdev_op.h
> +++ b/lib/bbdev/rte_bbdev_op.h
> @@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
> RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
> /** Set to drop the last CRC bits decoding output */
> RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
> + /** Set for transport block CRC-16 checking */
> + RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
> /** Set for bit-level de-interleaver bypass on Rx stream. */
> - RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
> + RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
> /** Set for HARQ combined input stream enable. */
> - RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
> + RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
> /** Set for HARQ combined output stream enable. */
> - RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
> + RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
> /** Set for LDPC decoder bypass.
> * RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
> */
> - RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
> + RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
> /** Set for soft-output stream enable */
> - RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
> + RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
> /** Set for Rate-Matching bypass on soft-out stream. */
> - RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
> + RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
> /** Set for bit-level de-interleaver bypass on soft-output stream. */
> - RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 9),
> + RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 10),
> /** Set for iteration stopping on successful decode condition
> * i.e. a successful syndrome check.
> */
> - RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
> + RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
> /** Set if a device supports decoder dequeue interrupts. */
> - RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
> + RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
> /** Set if a device supports scatter-gather functionality. */
> - RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
> + RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
> /** Set if a device supports input/output HARQ compression. */
> - RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
> + RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
> /** Set if a device supports input LLR compression. */
> - RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
> + RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
> /** Set if a device supports HARQ input from
> * device's internal memory.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 15),
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 16),
> /** Set if a device supports HARQ output to
> * device's internal memory.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 16),
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 17),
> /** Set if a device supports loop-back access to
> * HARQ internal memory. Intended for troubleshooting.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 17),
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 18),
> /** Set if a device includes LLR filler bits in the circular buffer
> * for HARQ memory. If not set, it is assumed the filler bits are not
> * in HARQ memory and handled directly by the LDPC decoder.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 18)
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 19)
The fiddling in the middle is fine since the bbdev API is experimental.
Looks good.
Reviewed-by: Tom Rix <trix@redhat.com>
> };
>
> /** Flags for LDPC encoder operation and capability structure */
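For reference, a minimal sketch of how an application could probe for the
new flag through the standard capability query before requesting CRC16
checking on an operation; names are as in this patch, error handling is
trimmed:

    #include <rte_bbdev.h>
    #include <rte_bbdev_op.h>

    static int
    crc16_check_supported(uint16_t dev_id)
    {
        struct rte_bbdev_info info;
        const struct rte_bbdev_op_cap *cap;

        if (rte_bbdev_info_get(dev_id, &info) != 0)
            return 0;
        /* The capability list is terminated by RTE_BBDEV_OP_NONE. */
        for (cap = info.drv.capabilities;
             cap->type != RTE_BBDEV_OP_NONE; cap++)
            if (cap->type == RTE_BBDEV_OP_LDPC_DEC)
                return !!(cap->cap.ldpc_dec.capability_flags &
                          RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK);
        return 0;
    }

If supported, the application then sets RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK in
op->ldpc_dec.op_flags when enqueueing the decode operation.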
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check
2021-09-01 15:00 3% ` Chautru, Nicolas
@ 2021-09-11 19:11 0% ` Tom Rix
0 siblings, 0 replies; 200+ results
From: Tom Rix @ 2021-09-11 19:11 UTC (permalink / raw)
To: Chautru, Nicolas, dev, gakhil
Cc: thomas, hemant.agrawal, Zhang, Mingshan, Joshi, Arun
On 9/1/21 8:00 AM, Chautru, Nicolas wrote:
>
>> -----Original Message-----
>> From: Tom Rix <trix@redhat.com>
>> Sent: Wednesday, September 1, 2021 6:37 AM
>> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
>> gakhil@marvell.com
>> Cc: thomas@monjalon.net; hemant.agrawal@nxp.com; Zhang, Mingshan
>> <mingshan.zhang@intel.com>; Joshi, Arun <arun.joshi@intel.com>
>> Subject: Re: [PATCH v2 1/6] bbdev: add capability for CRC16 check
>>
>>
>> On 8/19/21 2:10 PM, Nicolas Chautru wrote:
>>> Adding a missing operation flag for the case when CRC16
>>> is being used for the TB CRC check.
>>>
>>> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
>>> ---
>>> app/test-bbdev/test_bbdev_vector.c | 2 ++
>>> doc/guides/prog_guide/bbdev.rst | 3 +++
>>> doc/guides/rel_notes/release_21_11.rst | 1 +
>>> lib/bbdev/rte_bbdev_op.h | 34 ++++++++++++++++++--------------
>> --
>>> 4 files changed, 24 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/app/test-bbdev/test_bbdev_vector.c
>>> b/app/test-bbdev/test_bbdev_vector.c
>>> index 614dbd1..8d796b1 100644
>>> --- a/app/test-bbdev/test_bbdev_vector.c
>>> +++ b/app/test-bbdev/test_bbdev_vector.c
>>> @@ -167,6 +167,8 @@
>>> *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
>>> else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
>>> *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
>>> + else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
>>> + *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
>>> else if (!strcmp(token,
>> "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
>>> *op_flag_value =
>> RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
>>> else if (!strcmp(token,
>> "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
>>> diff --git a/doc/guides/prog_guide/bbdev.rst
>>> b/doc/guides/prog_guide/bbdev.rst index 9619280..8bd7cba 100644
>>> --- a/doc/guides/prog_guide/bbdev.rst
>>> +++ b/doc/guides/prog_guide/bbdev.rst
>>> @@ -891,6 +891,9 @@ given below.
>>> |RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
>>> | Set to drop the last CRC bits decoding output |
>>>
>>> +--------------------------------------------------------------------+
>>> +|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
>>> +| Set for code block CRC-16 checking |
>>> ++--------------------------------------------------------------------+
>>> |RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
>>> | Set for bit-level de-interleaver bypass on input stream |
>>>
>>> +--------------------------------------------------------------------+
>>> diff --git a/doc/guides/rel_notes/release_21_11.rst
>>> b/doc/guides/rel_notes/release_21_11.rst
>>> index d707a55..69dd518 100644
>>> --- a/doc/guides/rel_notes/release_21_11.rst
>>> +++ b/doc/guides/rel_notes/release_21_11.rst
>>> @@ -84,6 +84,7 @@ API Changes
>>> Also, make sure to start the actual text at the margin.
>>> =======================================================
>>>
>>> +* bbdev: Added capability related to more comprehensive CRC options.
>>>
>>> ABI Changes
>>> -----------
>>> diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h index
>>> f946842..7c44ddd 100644
>>> --- a/lib/bbdev/rte_bbdev_op.h
>>> +++ b/lib/bbdev/rte_bbdev_op.h
>>> @@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
>>> RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
>>> /** Set to drop the last CRC bits decoding output */
>>> RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
>>> + /** Set for transport block CRC-16 checking */
>>> + RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
>> Changing these enums will break backwards ABI compatibility.
>>
>> Why not add the new one at the end?
> To keep all the CRC-related flags next to each other for better readability and logical clarity. The ABI is still marked as experimental.
Ok
>
>> Tom
>>
>>> /** Set for bit-level de-interleaver bypass on Rx stream. */
>>> - RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
>>> + RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
>>> /** Set for HARQ combined input stream enable. */
>>> - RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
>>> + RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
>>> /** Set for HARQ combined output stream enable. */
>>> - RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
>>> + RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
>>> /** Set for LDPC decoder bypass.
>>> * RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
>>> */
>>> - RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
>>> + RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
>>> /** Set for soft-output stream enable */
>>> - RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
>>> + RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
>>> /** Set for Rate-Matching bypass on soft-out stream. */
>>> - RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
>>> + RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
>>> /** Set for bit-level de-interleaver bypass on soft-output stream. */
>>> - RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL <<
>> 9),
>>> + RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL <<
>> 10),
>>> /** Set for iteration stopping on successful decode condition
>>> * i.e. a successful syndrome check.
>>> */
>>> - RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
>>> + RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
>>> /** Set if a device supports decoder dequeue interrupts. */
>>> - RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
>>> + RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
>>> /** Set if a device supports scatter-gather functionality. */
>>> - RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
>>> + RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
>>> /** Set if a device supports input/output HARQ compression. */
>>> - RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
>>> + RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
>>> /** Set if a device supports input LLR compression. */
>>> - RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
>>> + RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
>>> /** Set if a device supports HARQ input from
>>> * device's internal memory.
>>> */
>>> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL
>> << 15),
>>> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL
>> << 16),
>>> /** Set if a device supports HARQ output to
>>> * device's internal memory.
>>> */
>>> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE =
>> (1ULL << 16),
>>> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE =
>> (1ULL << 17),
>>> /** Set if a device supports loop-back access to
>>> * HARQ internal memory. Intended for troubleshooting.
>>> */
>>> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL
>> << 17),
>>> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL
>> << 18),
>>> /** Set if a device includes LLR filler bits in the circular buffer
>>> * for HARQ memory. If not set, it is assumed the filler bits are not
>>> * in HARQ memory and handled directly by the LDPC decoder.
>>> */
>>> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL <<
>> 18)
>>> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL <<
>> 19)
>>> };
>>>
>>> /** Flags for LDPC encoder operation and capability structure */
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] cmdline: reduce ABI
@ 2021-09-10 23:16 8% ` Dmitry Kozlyuk
0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-09-10 23:16 UTC (permalink / raw)
To: dev; +Cc: Dmitry Kozlyuk, Ray Kinsella, Olivier Matz
Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
I would also hide struct rdline to be able to alter its buffer size,
but we don't have a deprecation notice for it.
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_21_11.rst | 2 ++
lib/cmdline/cmdline.h | 19 -------------------
lib/cmdline/cmdline_private.h | 8 +++++++-
4 files changed, 9 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
* metrics: The function ``rte_metrics_init`` will have a non-void return
in order to notify errors instead of calling ``rte_exit``.
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
- content. On Linux and FreeBSD, supported prior to DPDK 20.11,
- original structure will be kept until DPDK 21.11.
-
* security: The functions ``rte_security_set_pkt_metadata`` and
``rte_security_get_userdata`` will be made inline functions and additional
flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b573834..be73d17ef6 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -91,6 +91,8 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
ABI Changes
-----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
#ifndef _CMDLINE_H_
#define _CMDLINE_H_
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
#include <rte_common.h>
#include <rte_compat.h>
@@ -27,23 +23,8 @@
extern "C" {
#endif
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
- int s_in;
- int s_out;
- cmdline_parse_ctx_t *ctx;
- struct rdline rdl;
- char prompt[RDLINE_PROMPT_SIZE];
- struct termios oldterm;
-};
-
-#else
-
struct cmdline;
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
#include <rte_os_shim.h>
#ifdef RTE_EXEC_ENV_WINDOWS
#include <rte_windows.h>
+#else
+#include <termios.h>
#endif
#include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
int is_console_input;
int is_console_output;
};
+#endif
struct cmdline {
int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
cmdline_parse_ctx_t *ctx;
struct rdline rdl;
char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
struct terminal oldterm;
char repeated_char;
WORD repeat_count;
-};
+#else
+ struct termios oldterm;
#endif
+};
/* Disable buffering and echoing, save previous settings to oldterm. */
void terminal_adjust(struct cmdline *cl);
--
2.29.3
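For reference, application code that treats struct cmdline as an opaque
handle, i.e. the usual pattern sketched below, is unaffected by this
change; main_ctx stands for the application's own command table:

    #include <cmdline.h>
    #include <cmdline_socket.h>

    extern cmdline_parse_ctx_t main_ctx[];

    static void
    run_cli(void)
    {
        struct cmdline *cl;    /* opaque handle, no field access */

        cl = cmdline_stdin_new(main_ctx, "example> ");
        if (cl == NULL)
            return;
        cmdline_interact(cl);    /* returns on quit/EOF */
        cmdline_stdin_exit(cl);
    }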
^ permalink raw reply [relevance 8%]
* [dpdk-dev] [PATCH v7 10/11] doc: changes for new pcapng and dumpcap
2021-09-10 18:18 1% ` [dpdk-dev] [PATCH v7 06/11] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-10 18:18 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-10 18:18 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 +++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 86 +++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 228 insertions(+), 87 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets
+like Wireshark's dumpcap does for Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as demonstration of ``librte_pdump``
+ library. The ``dpdk-pdump`` tool provides more limited functionality and
+ and depends on the Pcap PMD. It is retained only for compatibility reasons;
+ users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap tool as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+Pcapng is a library for creating files in Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block carries meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
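
To illustrate the two ring formats described in the guide above, a hedged
sketch of a consumer draining the capture ring; ``rte_pcapng_write_packets()``
is an assumption carried over from this series, the rest is standard
ring/mbuf API::

    #include <stdbool.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>
    #include <rte_pcapng.h>

    #define CAPTURE_BURST 32

    /*
     * Drain one burst from the pdump ring. With RTE_PDUMP_FLAG_PCAPNG the
     * mbufs already carry the enhanced packet block framing added by
     * rte_pcapng_copy() and can be handed to the pcapng writer directly;
     * otherwise they are plain packet copies.
     */
    static void
    drain_capture_ring(struct rte_ring *ring, rte_pcapng_t *out, bool pcapng)
    {
        struct rte_mbuf *pkts[CAPTURE_BURST];
        unsigned int i, n;

        n = rte_ring_dequeue_burst(ring, (void **)pkts, CAPTURE_BURST, NULL);

        if (n > 0 && pcapng)
            rte_pcapng_write_packets(out, pkts, n); /* assumed writer call */
        /* For the legacy format the plain copies would be written out
         * here instead, e.g. via the pcap library. */

        for (i = 0; i < n; i++)
            rte_pktmbuf_free(pkts[i]);
    }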
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhanced packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces
+ and stopping after a number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancements to the pdump library to support:
+ * Packet filtering with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixed packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes captured packets to a file
+in the Pcapng capture file format.
+
+Without any options set, it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a Pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on the interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In DPDK, only ``testpmd`` is modified to initialize the packet capture
+ framework; other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than testpmd, the user
+ needs to explicitly modify that application to call the packet capture
+ framework initialization code. Refer to ``app/test-pmd/testpmd.c``
+ to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list the interfaces available for capture, use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 000:00:03.0
+ 1. 000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+ # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+ * ``-C <byte_limit>`` -- it's a kernel thing
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` follow the Wireshark dumpcap program and
+ are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v7 06/11] pdump: support pcapng and filtering
@ 2021-09-10 18:18 1% ` Stephen Hemminger
2021-09-10 18:18 1% ` [dpdk-dev] [PATCH v7 10/11] doc: changes for new pcapng and dumpcap Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-10 18:18 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of the exposed API or ABI)
is intentionally increased to cause any attempt to run
mismatched primary/secondary processes to fail.
Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
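
As a rough illustration of the new interface (declared in rte_pdump.h
below), a minimal sketch of a secondary process enabling pcapng capture
with an optional BPF filter and reading the shared statistics; the ring
and mempool sizes are arbitrary, and a NULL prm disables filtering::

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>
    #include <rte_pdump.h>

    static int
    start_capture(const struct rte_bpf_prm *prm)
    {
        struct rte_pdump_stats stats;
        struct rte_mempool *mp;
        struct rte_ring *ring;

        mp = rte_pktmbuf_pool_create("capture_mp", 8192, 256, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE,
                                     rte_socket_id());
        ring = rte_ring_create("capture_ring", 1024, rte_socket_id(),
                               RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (mp == NULL || ring == NULL)
            return -1;

        /* snaplen 0 is mapped to UINT32_MAX (capture full packets) */
        if (rte_pdump_enable_bpf(0, RTE_PDUMP_ALL_QUEUES,
                                 RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
                                 0, ring, mp, prm) < 0)
            return -1;

        /* counters live in a memzone shared with the primary process */
        if (rte_pdump_stats(0, &stats) == 0)
            printf("accepted %" PRIu64 ", ringfull %" PRIu64 "\n",
                   stats.accepted, stats.ringfull);
        return 0;
    }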
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 437 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 110 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 435 insertions(+), 126 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index ba88e9eabc58..1da521ea6185 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..f2047ad9f001 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with an older
+ * version of DPDK using an incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf_prm *prm;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,80 +61,140 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t rcs[nb_pkts];
+
+ if (cbs->filter &&
+ rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts) == 0) {
+ /* All packets were filtered out */
+ __atomic_fetch_add(&stats->filtered, nb_pkts,
+ __ATOMIC_RELAXED);
+ return;
+ }
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * if BPF program returns zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && rcs[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng then want to wrap packets
+ * otherwise a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +203,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +227,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +261,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -228,37 +290,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
uint16_t port;
int ret = 0;
+ struct rte_bpf *filter = NULL;
uint32_t flags;
uint16_t operation;
struct rte_ring *ring;
struct rte_mempool *mp;
- flags = p->flags;
- operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
+ if (p->prm) {
+ if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
+ "invalid BPF program type: %u\n",
+ p->prm->prog_arg.type);
return -EINVAL;
}
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
+
+ filter = rte_bpf_load(p->prm);
+ if (filter == NULL) {
+ PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+ rte_strerror(rte_errno));
+ return -rte_errno;
}
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ }
+
+ flags = p->flags;
+ operation = p->op;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +368,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +378,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -347,8 +421,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +476,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +517,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +531,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->prm = prm;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +568,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function, because although the original
+ * API left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +593,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +637,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, prm);
}
int
@@ -537,8 +676,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +692,66 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ PDUMP_LOG(ERR,
+ "pdump not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "can not find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..be3fd14c4bd3 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *filter);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +228,35 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are the sum over both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v1 7/7] eal: promote mcfg API's to stable
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 7/7] eal: promote mcfg " Anatoly Burakov
@ 2021-09-10 16:23 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 16:23 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> lib/eal/include/rte_eal_memconfig.h | 12 ------------
> lib/eal/version.map | 8 +++-----
> 2 files changed, 3 insertions(+), 17 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask API's to stable
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask " Anatoly Burakov
@ 2021-09-10 15:56 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 15:56 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> lib/eal/include/rte_memory.h | 12 ------------
> lib/eal/version.map | 6 +++---
> 2 files changed, 3 insertions(+), 15 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 5/7] mem: promote extmem API's to stable
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 5/7] mem: promote extmem " Anatoly Burakov
@ 2021-09-10 15:56 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 15:56 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 4/7] mem: promote memseg API's to stable
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 4/7] mem: promote memseg " Anatoly Burakov
@ 2021-09-10 15:55 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 15:55 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> lib/eal/include/rte_memory.h | 17 -----------------
> lib/eal/version.map | 34 +++++++++++++++++-----------------
> 2 files changed, 17 insertions(+), 34 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 3/7] eal: promote malloc API's to stable
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 3/7] eal: promote malloc " Anatoly Burakov
@ 2021-09-10 15:53 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 15:53 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> lib/eal/include/rte_malloc.h | 10 ----------
> lib/eal/version.map | 20 ++++++++++----------
> 2 files changed, 10 insertions(+), 20 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 2/7] fbarray: promote API's to stable
2021-09-10 12:30 2% ` [dpdk-dev] [PATCH v1 2/7] fbarray: promote " Anatoly Burakov
@ 2021-09-10 15:52 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 15:52 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> lib/eal/include/rte_fbarray.h | 26 ------------------
> lib/eal/version.map | 52 +++++++++++++++++------------------
> 2 files changed, 26 insertions(+), 52 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
` (5 preceding siblings ...)
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 7/7] eal: promote mcfg " Anatoly Burakov
@ 2021-09-10 15:51 0% ` Kinsella, Ray
6 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-10 15:51 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 10/09/2021 13:30, Anatoly Burakov wrote:
> As per ABI policy, move the formerly experimental API's to the stable
> section.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> lib/eal/include/rte_eal.h | 24 ------------------------
> lib/eal/version.map | 14 ++++++--------
> 2 files changed, 6 insertions(+), 32 deletions(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] Minutes of Technical Board Meeting, 2021-08-25
[not found] <DU2PR04MB8630C0339D3CFB3CDE952B1389D69@DU2PR04MB8630.eurprd04.prod.outlook.com>
@ 2021-09-10 14:25 4% ` Hemant Agrawal
0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2021-09-10 14:25 UTC (permalink / raw)
To: dev, techboard
Minutes of Technical Board Meeting, 2021-08-25
Members Attending: 11/12
- Aaron Conole
- Bruce Richardson
- Ferruh Yigit
- Hemant Agrawal (Chair)
- Honnappa Nagarahalli
- Jerin Jacob
- Kevin Traynor
- Maxime Coquelin
- Olivier Matz
- Stephen Hemminger
- Thomas Monjalon
NOTE: The Technical Board meetings take place every second Wednesday on https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
Agenda and minutes can be found at https://annuel.framapad.org/p/r.0c3cc4d1e011214183872a98f6b5c7db
NOTE: Next meeting will be on Wednesday 2021-09-08 @3pm UTC, and will be chaired by Honnappa
#1 DPDK User list
* Ecosystem user list required.
* A survey may help in generating such a list.
* LF Marketing may also help in getting such a list generated from public info.
#2 API/ABI breakage exception approval for vhost API
- The policy is that one should not break the API/ABI even for LTS releases without prior notification. Exceptions can be approved for LTS.
- In this particular case, they shall try to use function versioning. AI: Ferruh to send a response.
#3 Documenting criteria on adding/removing members to technical board
* Document needs further reviews, please review.
* Will set a deadline for the document review in the next meeting.
* Need clear rules for the review of existing members' membership.
#4 ORAN API License
* ORAN forum is planning to use SCCL license in the API file. Will that be acceptable for DPDK integration?
* Members need more details. Currently distros may not favor it as it may cause issues in source code distribution.
* Need to discuss more if TB can send it for review by DPDK GB.
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [RFC 1/3] ethdev: update modify field flow action
@ 2021-09-10 14:16 9% Viacheslav Ovsiienko
0 siblings, 0 replies; 200+ results
From: Viacheslav Ovsiienko @ 2021-09-10 14:16 UTC (permalink / raw)
To: dev; +Cc: orika
The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- the immediate source can be presented either as an unsigned
64-bit integer or as a pointer to a data pattern in memory,
but there was no explicit pointer field defined in the union
- the byte ordering for the 64-bit integer was not specified;
many fields have smaller lengths and byte ordering
is crucial
- how the bit offset is applied to the immediate source
field was neither defined nor documented
- a 64-bit integer is not large enough to hold MAC and
IPv6 addresses
In order to cover these issues and exclude any ambiguities,
the following is done:
- introduce an explicit pointer field
in the rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with a 16-byte array
- update the modify field flow action documentation
[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
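
For illustration, a hedged sketch of filling the reworked structure to
write an immediate destination MAC address; the container struct
rte_flow_action_modify_field and RTE_FLOW_MODIFY_SET come from the
existing API, the 16-byte value array from the diff below, and the
sample address is arbitrary::

    #include <stdint.h>
    #include <string.h>
    #include <rte_flow.h>

    /*
     * Sketch: set the packet's destination MAC from an immediate value.
     * The 6 bytes are laid out exactly as the dst field of
     * rte_flow_item_eth, per the documentation change below.
     */
    static void
    set_dst_mac_action(struct rte_flow_action_modify_field *conf)
    {
        static const uint8_t mac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

        memset(conf, 0, sizeof(*conf));
        conf->operation = RTE_FLOW_MODIFY_SET;
        conf->dst.field = RTE_FLOW_FIELD_MAC_DST;
        conf->src.field = RTE_FLOW_FIELD_VALUE;
        memcpy(conf->src.value, mac, sizeof(mac));
        conf->width = 48;       /* copy the whole 48-bit address */
    }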
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 8 ++++++++
doc/guides/rel_notes/release_21_11.rst | 7 +++++++
lib/ethdev/rte_flow.h | 15 ++++++++++++---
3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..a54760a7b4 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,14 @@ a packet to any other part of it.
``value`` sets an immediate value to be used as a source or points to a
location of the value in memory. It is used instead of ``level`` and ``offset``
for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for a field with type
+RTE_FLOW_FIELD_MAC_DST should follow the conventions of the dst field
+in the rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
+rte_flow_item_ipv6 conventions, and so on. The bitfield extracted from the
+memory being applied as the second operation parameter is defined by the width
+and the destination field offset. If the field size is larger than 16 bytes,
+the pattern can be provided as a pointer only.
.. _table_rte_flow_action_modify_field:
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..fdeba27e14 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: ``rte_flow_action_modify_data`` structure updated, the immediate data
+ array is extended, a data pointer field is explicitly added to the union, the
+ action behavior is defined in a stricter fashion and the documentation updated.
+
ABI Changes
-----------
@@ -101,6 +105,9 @@ ABI Changes
=======================================================
+* ethdev: ``rte_flow_action_modify_data`` structure updated.
+
+
Known Issues
------------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d..50e6f34579 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
};
/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
* Field description for MODIFY_FIELD action.
*/
struct rte_flow_action_modify_data {
@@ -3217,10 +3220,16 @@ struct rte_flow_action_modify_data {
uint32_t offset;
};
/**
- * Immediate value for RTE_FLOW_FIELD_VALUE or
- * memory address for RTE_FLOW_FIELD_POINTER.
+ * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+ * same byte order and length as in the relevant rte_flow_item_xxx.
*/
- uint64_t value;
+ uint8_t value[16];
+ /*
+ * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+ * should be the same as for the relevant field in the
+ * rte_flow_item_xxx structure.
+ */
+ void *pvalue;
};
};
--
2.18.1
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH v1 3/7] eal: promote malloc API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
2021-09-10 12:30 2% ` [dpdk-dev] [PATCH v1 2/7] fbarray: promote " Anatoly Burakov
@ 2021-09-10 12:30 3% ` Anatoly Burakov
2021-09-10 15:53 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 4/7] mem: promote memseg " Anatoly Burakov
` (4 subsequent siblings)
6 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental API's to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_malloc.h | 10 ----------
lib/eal/version.map | 20 ++++++++++----------
2 files changed, 10 insertions(+), 20 deletions(-)
diff --git a/lib/eal/include/rte_malloc.h b/lib/eal/include/rte_malloc.h
index 895bb6e849..ed02e15119 100644
--- a/lib/eal/include/rte_malloc.h
+++ b/lib/eal/include/rte_malloc.h
@@ -157,7 +157,6 @@ rte_realloc(void *ptr, size_t size, unsigned int align)
* align is not a power of two).
* - Otherwise, the pointer to the reallocated memory.
*/
-__rte_experimental
void *
rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket)
__rte_alloc_size(2);
@@ -339,7 +338,6 @@ rte_malloc_get_socket_stats(int socket,
* EPERM - attempted to add memory to a reserved heap
* ENOSPC - no more space in internal config to store a new memory chunk
*/
-__rte_experimental
int
rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
rte_iova_t iova_addrs[], unsigned int n_pages, size_t page_sz);
@@ -371,7 +369,6 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
* ENOENT - heap or memory chunk was not found
* EBUSY - memory chunk still contains data
*/
-__rte_experimental
int
rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len);
@@ -396,7 +393,6 @@ rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len);
* EPERM - attempted to attach memory to a reserved heap
* ENOENT - heap or memory chunk was not found
*/
-__rte_experimental
int
rte_malloc_heap_memory_attach(const char *heap_name, void *va_addr, size_t len);
@@ -421,7 +417,6 @@ rte_malloc_heap_memory_attach(const char *heap_name, void *va_addr, size_t len);
* EPERM - attempted to detach memory from a reserved heap
* ENOENT - heap or memory chunk was not found
*/
-__rte_experimental
int
rte_malloc_heap_memory_detach(const char *heap_name, void *va_addr, size_t len);
@@ -441,7 +436,6 @@ rte_malloc_heap_memory_detach(const char *heap_name, void *va_addr, size_t len);
* EEXIST - heap by name of ``heap_name`` already exists
* ENOSPC - no more space in internal config to store a new heap
*/
-__rte_experimental
int
rte_malloc_heap_create(const char *heap_name);
@@ -465,7 +459,6 @@ rte_malloc_heap_create(const char *heap_name);
* EPERM - attempting to destroy reserved heap
* EBUSY - heap still contains data
*/
-__rte_experimental
int
rte_malloc_heap_destroy(const char *heap_name);
@@ -480,7 +473,6 @@ rte_malloc_heap_destroy(const char *heap_name);
* EINVAL - ``name`` was NULL
* ENOENT - heap identified by the name ``name`` was not found
*/
-__rte_experimental
int
rte_malloc_heap_get_socket(const char *name);
@@ -496,7 +488,6 @@ rte_malloc_heap_get_socket(const char *name);
* 0 if socket ID refers to internal DPDK memory
* -1 if socket ID is invalid
*/
-__rte_experimental
int
rte_malloc_heap_socket_is_external(int socket_id);
@@ -527,7 +518,6 @@ rte_malloc_dump_stats(FILE *f, const char *type);
* @param f
* A pointer to a file for output
*/
-__rte_experimental
void
rte_malloc_dump_heaps(FILE *f);
diff --git a/lib/eal/version.map b/lib/eal/version.map
index d3b2b07a78..6f768e66d0 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -141,8 +141,17 @@ DPDK_22 {
rte_log_set_level_pattern;
rte_log_set_level_regexp;
rte_malloc;
+ rte_malloc_dump_heaps;
rte_malloc_dump_stats;
rte_malloc_get_socket_stats;
+ rte_malloc_heap_create;
+ rte_malloc_heap_destroy;
+ rte_malloc_heap_get_socket;
+ rte_malloc_heap_memory_add;
+ rte_malloc_heap_memory_attach;
+ rte_malloc_heap_memory_detach;
+ rte_malloc_heap_memory_remove;
+ rte_malloc_heap_socket_is_external;
rte_malloc_set_limit;
rte_malloc_socket;
rte_malloc_validate;
@@ -181,6 +190,7 @@ DPDK_22 {
rte_openlog_stream;
rte_rand;
rte_realloc;
+ rte_realloc_socket;
rte_reciprocal_value;
rte_reciprocal_value_u64;
rte_rtm_supported;
@@ -262,7 +272,6 @@ EXPERIMENTAL {
rte_dev_event_monitor_start; # WINDOWS_NO_EXPORT
rte_dev_event_monitor_stop; # WINDOWS_NO_EXPORT
rte_log_register_type_and_pick_level;
- rte_malloc_dump_heaps;
rte_mem_alloc_validator_register;
rte_mem_alloc_validator_unregister;
rte_mem_check_dma_mask;
@@ -291,14 +300,6 @@ EXPERIMENTAL {
rte_dev_event_callback_process;
rte_dev_hotplug_handle_disable; # WINDOWS_NO_EXPORT
rte_dev_hotplug_handle_enable; # WINDOWS_NO_EXPORT
- rte_malloc_heap_create;
- rte_malloc_heap_destroy;
- rte_malloc_heap_get_socket;
- rte_malloc_heap_memory_add;
- rte_malloc_heap_memory_attach;
- rte_malloc_heap_memory_detach;
- rte_malloc_heap_memory_remove;
- rte_malloc_heap_socket_is_external;
rte_mem_check_dma_mask_thread_unsafe;
rte_mem_set_dma_mask;
rte_memseg_get_fd;
@@ -316,7 +317,6 @@ EXPERIMENTAL {
rte_dev_dma_map;
rte_dev_dma_unmap;
rte_intr_callback_unregister_pending;
- rte_realloc_socket;
# added in 19.08
rte_intr_ack;
--
2.25.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 4/7] mem: promote memseg API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
2021-09-10 12:30 2% ` [dpdk-dev] [PATCH v1 2/7] fbarray: promote " Anatoly Burakov
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 3/7] eal: promote malloc " Anatoly Burakov
@ 2021-09-10 12:30 3% ` Anatoly Burakov
2021-09-10 15:55 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 5/7] mem: promote extmem " Anatoly Burakov
` (3 subsequent siblings)
6 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental API's to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_memory.h | 17 -----------------
lib/eal/version.map | 34 +++++++++++++++++-----------------
2 files changed, 17 insertions(+), 34 deletions(-)
diff --git a/lib/eal/include/rte_memory.h b/lib/eal/include/rte_memory.h
index bba9b5300a..4acb2a72a8 100644
--- a/lib/eal/include/rte_memory.h
+++ b/lib/eal/include/rte_memory.h
@@ -127,7 +127,6 @@ rte_iova_t rte_mem_virt2iova(const void *virt);
* Virtual address corresponding to iova address (or NULL if address does not
* exist within DPDK memory map).
*/
-__rte_experimental
void *
rte_mem_iova2virt(rte_iova_t iova);
@@ -142,7 +141,6 @@ rte_mem_iova2virt(rte_iova_t iova);
* @return
* Memseg pointer on success, or NULL on error.
*/
-__rte_experimental
struct rte_memseg *
rte_mem_virt2memseg(const void *virt, const struct rte_memseg_list *msl);
@@ -154,7 +152,6 @@ rte_mem_virt2memseg(const void *virt, const struct rte_memseg_list *msl);
* @return
* Memseg list to which this virtual address belongs to.
*/
-__rte_experimental
struct rte_memseg_list *
rte_mem_virt2memseg_list(const void *virt);
@@ -209,7 +206,6 @@ typedef int (*rte_memseg_list_walk_t)(const struct rte_memseg_list *msl,
* 1 if stopped by the user
* -1 if user function reported error
*/
-__rte_experimental
int
rte_memseg_walk(rte_memseg_walk_t func, void *arg);
@@ -231,7 +227,6 @@ rte_memseg_walk(rte_memseg_walk_t func, void *arg);
* 1 if stopped by the user
* -1 if user function reported error
*/
-__rte_experimental
int
rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg);
@@ -253,7 +248,6 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg);
* 1 if stopped by the user
* -1 if user function reported error
*/
-__rte_experimental
int
rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg);
@@ -272,7 +266,6 @@ rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg);
* 1 if stopped by the user
* -1 if user function reported error
*/
-__rte_experimental
int
rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg);
@@ -291,7 +284,6 @@ rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg);
* 1 if stopped by the user
* -1 if user function reported error
*/
-__rte_experimental
int
rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg);
@@ -310,7 +302,6 @@ rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg);
* 1 if stopped by the user
* -1 if user function reported error
*/
-__rte_experimental
int
rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg);
@@ -335,7 +326,6 @@ rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg);
* - ENOENT - ``ms`` is an unused segment
* - ENOTSUP - segment fd's are not supported
*/
-__rte_experimental
int
rte_memseg_get_fd(const struct rte_memseg *ms);
@@ -360,7 +350,6 @@ rte_memseg_get_fd(const struct rte_memseg *ms);
* - ENOENT - ``ms`` is an unused segment
* - ENOTSUP - segment fd's are not supported
*/
-__rte_experimental
int
rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms);
@@ -385,7 +374,6 @@ rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms);
* - ENOENT - ``ms`` is an unused segment
* - ENOTSUP - segment fd's are not supported
*/
-__rte_experimental
int
rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset);
@@ -410,7 +398,6 @@ rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset);
* - ENOENT - ``ms`` is an unused segment
* - ENOTSUP - segment fd's are not supported
*/
-__rte_experimental
int
rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
size_t *offset);
@@ -678,7 +665,6 @@ typedef void (*rte_mem_event_callback_t)(enum rte_mem_event event_type,
* -1 on unsuccessful callback register, with rte_errno value indicating
* reason for failure.
*/
-__rte_experimental
int
rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
void *arg);
@@ -697,7 +683,6 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
* -1 on unsuccessful callback unregister, with rte_errno value indicating
* reason for failure.
*/
-__rte_experimental
int
rte_mem_event_callback_unregister(const char *name, void *arg);
@@ -747,7 +732,6 @@ typedef int (*rte_mem_alloc_validator_t)(int socket_id,
* -1 on unsuccessful callback register, with rte_errno value indicating
* reason for failure.
*/
-__rte_experimental
int
rte_mem_alloc_validator_register(const char *name,
rte_mem_alloc_validator_t clb, int socket_id, size_t limit);
@@ -766,7 +750,6 @@ rte_mem_alloc_validator_register(const char *name,
* -1 on unsuccessful callback unregister, with rte_errno value indicating
* reason for failure.
*/
-__rte_experimental
int
rte_mem_alloc_validator_unregister(const char *name, int socket_id);
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 6f768e66d0..359b784c16 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -168,12 +168,29 @@ DPDK_22 {
rte_mcfg_tailq_read_unlock;
rte_mcfg_tailq_write_lock;
rte_mcfg_tailq_write_unlock;
+ rte_mem_alloc_validator_register;
+ rte_mem_alloc_validator_unregister;
+ rte_mem_event_callback_register;
+ rte_mem_event_callback_unregister;
+ rte_mem_iova2virt;
rte_mem_lock_page;
rte_mem_virt2iova;
+ rte_mem_virt2memseg;
+ rte_mem_virt2memseg_list;
rte_mem_virt2phy;
rte_memdump;
rte_memory_get_nchannel;
rte_memory_get_nrank;
+ rte_memseg_contig_walk;
+ rte_memseg_contig_walk_thread_unsafe;
+ rte_memseg_get_fd;
+ rte_memseg_get_fd_offset;
+ rte_memseg_get_fd_offset_thread_unsafe;
+ rte_memseg_get_fd_thread_unsafe;
+ rte_memseg_list_walk;
+ rte_memseg_list_walk_thread_unsafe;
+ rte_memseg_walk;
+ rte_memseg_walk_thread_unsafe;
rte_memzone_dump;
rte_memzone_free;
rte_memzone_lookup;
@@ -272,17 +289,7 @@ EXPERIMENTAL {
rte_dev_event_monitor_start; # WINDOWS_NO_EXPORT
rte_dev_event_monitor_stop; # WINDOWS_NO_EXPORT
rte_log_register_type_and_pick_level;
- rte_mem_alloc_validator_register;
- rte_mem_alloc_validator_unregister;
rte_mem_check_dma_mask;
- rte_mem_event_callback_register;
- rte_mem_event_callback_unregister;
- rte_mem_iova2virt;
- rte_mem_virt2memseg;
- rte_mem_virt2memseg_list;
- rte_memseg_contig_walk;
- rte_memseg_list_walk;
- rte_memseg_walk;
# added in 18.08
rte_class_find;
@@ -291,9 +298,6 @@ EXPERIMENTAL {
rte_class_unregister;
rte_dev_iterator_init;
rte_dev_iterator_next;
- rte_memseg_contig_walk_thread_unsafe;
- rte_memseg_list_walk_thread_unsafe;
- rte_memseg_walk_thread_unsafe;
# added in 18.11
rte_delay_us_sleep;
@@ -302,10 +306,6 @@ EXPERIMENTAL {
rte_dev_hotplug_handle_enable; # WINDOWS_NO_EXPORT
rte_mem_check_dma_mask_thread_unsafe;
rte_mem_set_dma_mask;
- rte_memseg_get_fd;
- rte_memseg_get_fd_offset;
- rte_memseg_get_fd_offset_thread_unsafe;
- rte_memseg_get_fd_thread_unsafe;
# added in 19.02
rte_extmem_attach;
--
2.25.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 2/7] fbarray: promote API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
@ 2021-09-10 12:30 2% ` Anatoly Burakov
2021-09-10 15:52 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 3/7] eal: promote malloc " Anatoly Burakov
` (5 subsequent siblings)
6 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental APIs to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_fbarray.h | 26 ------------------
lib/eal/version.map | 52 +++++++++++++++++------------------
2 files changed, 26 insertions(+), 52 deletions(-)
diff --git a/lib/eal/include/rte_fbarray.h b/lib/eal/include/rte_fbarray.h
index 6dccdbec98..c64868711e 100644
--- a/lib/eal/include/rte_fbarray.h
+++ b/lib/eal/include/rte_fbarray.h
@@ -74,7 +74,6 @@ struct rte_fbarray {
* - 0 on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
unsigned int elt_sz);
@@ -97,7 +96,6 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
* - 0 on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_attach(struct rte_fbarray *arr);
@@ -119,7 +117,6 @@ rte_fbarray_attach(struct rte_fbarray *arr);
* - 0 on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_destroy(struct rte_fbarray *arr);
@@ -138,7 +135,6 @@ rte_fbarray_destroy(struct rte_fbarray *arr);
* - 0 on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_detach(struct rte_fbarray *arr);
@@ -156,7 +152,6 @@ rte_fbarray_detach(struct rte_fbarray *arr);
* - non-NULL pointer on success.
* - NULL on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
void *
rte_fbarray_get(const struct rte_fbarray *arr, unsigned int idx);
@@ -174,7 +169,6 @@ rte_fbarray_get(const struct rte_fbarray *arr, unsigned int idx);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_idx(const struct rte_fbarray *arr, const void *elt);
@@ -192,7 +186,6 @@ rte_fbarray_find_idx(const struct rte_fbarray *arr, const void *elt);
* - 0 on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_set_used(struct rte_fbarray *arr, unsigned int idx);
@@ -210,7 +203,6 @@ rte_fbarray_set_used(struct rte_fbarray *arr, unsigned int idx);
* - 0 on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_set_free(struct rte_fbarray *arr, unsigned int idx);
@@ -229,7 +221,6 @@ rte_fbarray_set_free(struct rte_fbarray *arr, unsigned int idx);
* - 0 if element is unused.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_is_used(struct rte_fbarray *arr, unsigned int idx);
@@ -247,7 +238,6 @@ rte_fbarray_is_used(struct rte_fbarray *arr, unsigned int idx);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_next_free(struct rte_fbarray *arr, unsigned int start);
@@ -265,7 +255,6 @@ rte_fbarray_find_next_free(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_next_used(struct rte_fbarray *arr, unsigned int start);
@@ -286,7 +275,6 @@ rte_fbarray_find_next_used(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_next_n_free(struct rte_fbarray *arr, unsigned int start,
unsigned int n);
@@ -308,7 +296,6 @@ rte_fbarray_find_next_n_free(struct rte_fbarray *arr, unsigned int start,
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_next_n_used(struct rte_fbarray *arr, unsigned int start,
unsigned int n);
@@ -327,7 +314,6 @@ rte_fbarray_find_next_n_used(struct rte_fbarray *arr, unsigned int start,
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_contig_free(struct rte_fbarray *arr,
unsigned int start);
@@ -346,7 +332,6 @@ rte_fbarray_find_contig_free(struct rte_fbarray *arr,
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_contig_used(struct rte_fbarray *arr, unsigned int start);
@@ -363,7 +348,6 @@ rte_fbarray_find_contig_used(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_prev_free(struct rte_fbarray *arr, unsigned int start);
@@ -381,7 +365,6 @@ rte_fbarray_find_prev_free(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_prev_used(struct rte_fbarray *arr, unsigned int start);
@@ -403,7 +386,6 @@ rte_fbarray_find_prev_used(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_prev_n_free(struct rte_fbarray *arr, unsigned int start,
unsigned int n);
@@ -426,7 +408,6 @@ rte_fbarray_find_prev_n_free(struct rte_fbarray *arr, unsigned int start,
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_prev_n_used(struct rte_fbarray *arr, unsigned int start,
unsigned int n);
@@ -446,7 +427,6 @@ rte_fbarray_find_prev_n_used(struct rte_fbarray *arr, unsigned int start,
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_rev_contig_free(struct rte_fbarray *arr,
unsigned int start);
@@ -466,7 +446,6 @@ rte_fbarray_find_rev_contig_free(struct rte_fbarray *arr,
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_rev_contig_used(struct rte_fbarray *arr, unsigned int start);
@@ -484,7 +463,6 @@ rte_fbarray_find_rev_contig_used(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_biggest_free(struct rte_fbarray *arr, unsigned int start);
@@ -502,7 +480,6 @@ rte_fbarray_find_biggest_free(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_biggest_used(struct rte_fbarray *arr, unsigned int start);
@@ -521,7 +498,6 @@ rte_fbarray_find_biggest_used(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_rev_biggest_free(struct rte_fbarray *arr, unsigned int start);
@@ -540,7 +516,6 @@ rte_fbarray_find_rev_biggest_free(struct rte_fbarray *arr, unsigned int start);
* - non-negative integer on success.
* - -1 on failure, with ``rte_errno`` indicating reason for failure.
*/
-__rte_experimental
int
rte_fbarray_find_rev_biggest_used(struct rte_fbarray *arr, unsigned int start);
@@ -554,7 +529,6 @@ rte_fbarray_find_rev_biggest_used(struct rte_fbarray *arr, unsigned int start);
* @param f
* File object to dump information into.
*/
-__rte_experimental
void
rte_fbarray_dump_metadata(struct rte_fbarray *arr, FILE *f);
diff --git a/lib/eal/version.map b/lib/eal/version.map
index c6e78d68d1..d3b2b07a78 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -70,6 +70,32 @@ DPDK_22 {
rte_epoll_ctl;
rte_epoll_wait;
rte_exit;
+ rte_fbarray_attach;
+ rte_fbarray_destroy;
+ rte_fbarray_detach;
+ rte_fbarray_dump_metadata;
+ rte_fbarray_find_biggest_free;
+ rte_fbarray_find_biggest_used;
+ rte_fbarray_find_contig_free;
+ rte_fbarray_find_contig_used;
+ rte_fbarray_find_idx;
+ rte_fbarray_find_next_free;
+ rte_fbarray_find_next_n_free;
+ rte_fbarray_find_next_n_used;
+ rte_fbarray_find_next_used;
+ rte_fbarray_find_prev_free;
+ rte_fbarray_find_prev_n_free;
+ rte_fbarray_find_prev_n_used;
+ rte_fbarray_find_prev_used;
+ rte_fbarray_find_rev_contig_free;
+ rte_fbarray_find_rev_contig_used;
+ rte_fbarray_find_rev_biggest_free;
+ rte_fbarray_find_rev_biggest_used;
+ rte_fbarray_get;
+ rte_fbarray_init;
+ rte_fbarray_is_used;
+ rte_fbarray_set_free;
+ rte_fbarray_set_used;
rte_free;
rte_get_hpet_cycles; # WINDOWS_NO_EXPORT
rte_get_hpet_hz; # WINDOWS_NO_EXPORT
@@ -235,22 +261,6 @@ EXPERIMENTAL {
rte_dev_event_callback_unregister;
rte_dev_event_monitor_start; # WINDOWS_NO_EXPORT
rte_dev_event_monitor_stop; # WINDOWS_NO_EXPORT
- rte_fbarray_attach;
- rte_fbarray_destroy;
- rte_fbarray_detach;
- rte_fbarray_dump_metadata;
- rte_fbarray_find_contig_free;
- rte_fbarray_find_contig_used;
- rte_fbarray_find_idx;
- rte_fbarray_find_next_free;
- rte_fbarray_find_next_n_free;
- rte_fbarray_find_next_n_used;
- rte_fbarray_find_next_used;
- rte_fbarray_get;
- rte_fbarray_init;
- rte_fbarray_is_used;
- rte_fbarray_set_free;
- rte_fbarray_set_used;
rte_log_register_type_and_pick_level;
rte_malloc_dump_heaps;
rte_mem_alloc_validator_register;
@@ -272,12 +282,6 @@ EXPERIMENTAL {
rte_class_unregister;
rte_dev_iterator_init;
rte_dev_iterator_next;
- rte_fbarray_find_prev_free;
- rte_fbarray_find_prev_n_free;
- rte_fbarray_find_prev_n_used;
- rte_fbarray_find_prev_used;
- rte_fbarray_find_rev_contig_free;
- rte_fbarray_find_rev_contig_used;
rte_memseg_contig_walk_thread_unsafe;
rte_memseg_list_walk_thread_unsafe;
rte_memseg_walk_thread_unsafe;
@@ -311,10 +315,6 @@ EXPERIMENTAL {
# added in 19.05
rte_dev_dma_map;
rte_dev_dma_unmap;
- rte_fbarray_find_biggest_free;
- rte_fbarray_find_biggest_used;
- rte_fbarray_find_rev_biggest_free;
- rte_fbarray_find_rev_biggest_used;
rte_intr_callback_unregister_pending;
rte_realloc_socket;
--
2.25.1
^ permalink raw reply [relevance 2%]
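To illustrate the promoted fbarray interface, a minimal sketch (the array name "example_arr" and the sizing are illustrative values):

#include <stdint.h>
#include <rte_fbarray.h>

static int
fbarray_demo(void)
{
    struct rte_fbarray arr;
    int idx;

    if (rte_fbarray_init(&arr, "example_arr", 1024, sizeof(uint64_t)) < 0)
        return -1; /* rte_errno indicates the reason */

    idx = rte_fbarray_find_next_free(&arr, 0);
    if (idx >= 0) {
        uint64_t *slot = rte_fbarray_get(&arr, idx);

        *slot = 42;
        rte_fbarray_set_used(&arr, idx); /* mark the slot occupied */
    }
    return rte_fbarray_destroy(&arr);
}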
* [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
` (3 preceding siblings ...)
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 5/7] mem: promote extmem " Anatoly Burakov
@ 2021-09-10 12:30 3% ` Anatoly Burakov
2021-09-10 15:56 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 7/7] eal: promote mcfg " Anatoly Burakov
2021-09-10 15:51 0% ` [dpdk-dev] [PATCH v1 1/7] eal: promote IPC " Kinsella, Ray
6 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental APIs to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_memory.h | 12 ------------
lib/eal/version.map | 6 +++---
2 files changed, 3 insertions(+), 15 deletions(-)
diff --git a/lib/eal/include/rte_memory.h b/lib/eal/include/rte_memory.h
index c68b9d5e62..6d018629ae 100644
--- a/lib/eal/include/rte_memory.h
+++ b/lib/eal/include/rte_memory.h
@@ -553,22 +553,15 @@ unsigned rte_memory_get_nchannel(void);
unsigned rte_memory_get_nrank(void);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Check if all currently allocated memory segments are compliant with
* supplied DMA address width.
*
* @param maskbits
* Address width to check against.
*/
-__rte_experimental
int rte_mem_check_dma_mask(uint8_t maskbits);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Check if all currently allocated memory segments are compliant with
* supplied DMA address width. This function will use
* rte_memseg_walk_thread_unsafe instead of rte_memseg_walk implying
@@ -581,18 +574,13 @@ int rte_mem_check_dma_mask(uint8_t maskbits);
* @param maskbits
* Address width to check against.
*/
-__rte_experimental
int rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Set dma mask to use once memory initialization is done. Previous functions
* rte_mem_check_dma_mask and rte_mem_check_dma_mask_thread_unsafe can not be
* used safely until memory has been initialized.
*/
-__rte_experimental
void rte_mem_set_dma_mask(uint8_t maskbits);
/**
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 420779e1aa..ebafb05e90 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -174,10 +174,13 @@ DPDK_22 {
rte_mcfg_tailq_write_unlock;
rte_mem_alloc_validator_register;
rte_mem_alloc_validator_unregister;
+ rte_mem_check_dma_mask;
+ rte_mem_check_dma_mask_thread_unsafe;
rte_mem_event_callback_register;
rte_mem_event_callback_unregister;
rte_mem_iova2virt;
rte_mem_lock_page;
+ rte_mem_set_dma_mask;
rte_mem_virt2iova;
rte_mem_virt2memseg;
rte_mem_virt2memseg_list;
@@ -293,7 +296,6 @@ EXPERIMENTAL {
rte_dev_event_monitor_start; # WINDOWS_NO_EXPORT
rte_dev_event_monitor_stop; # WINDOWS_NO_EXPORT
rte_log_register_type_and_pick_level;
- rte_mem_check_dma_mask;
# added in 18.08
rte_class_find;
@@ -308,8 +310,6 @@ EXPERIMENTAL {
rte_dev_event_callback_process;
rte_dev_hotplug_handle_disable; # WINDOWS_NO_EXPORT
rte_dev_hotplug_handle_enable; # WINDOWS_NO_EXPORT
- rte_mem_check_dma_mask_thread_unsafe;
- rte_mem_set_dma_mask;
# added in 19.05
rte_dev_dma_map;
--
2.25.1
^ permalink raw reply [relevance 3%]
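For context, a sketch of how the promoted DMA mask check might be used (the 48-bit width is an example value; real code would take it from device capabilities):

#include <rte_memory.h>

static int
check_dev_dma(void)
{
    /* verify all allocated segments fit a 48-bit IOVA space */
    if (rte_mem_check_dma_mask(48) != 0)
        return -1; /* some segment exceeds the mask */
    return 0;
}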
* [dpdk-dev] [PATCH v1 7/7] eal: promote mcfg API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
` (4 preceding siblings ...)
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask " Anatoly Burakov
@ 2021-09-10 12:30 3% ` Anatoly Burakov
2021-09-10 16:23 0% ` Kinsella, Ray
2021-09-10 15:51 0% ` [dpdk-dev] [PATCH v1 1/7] eal: promote IPC " Kinsella, Ray
6 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental APIs to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_eal_memconfig.h | 12 ------------
lib/eal/version.map | 8 +++-----
2 files changed, 3 insertions(+), 17 deletions(-)
diff --git a/lib/eal/include/rte_eal_memconfig.h b/lib/eal/include/rte_eal_memconfig.h
index dede2ee324..44f5324906 100644
--- a/lib/eal/include/rte_eal_memconfig.h
+++ b/lib/eal/include/rte_eal_memconfig.h
@@ -92,33 +92,21 @@ void
rte_mcfg_mempool_write_unlock(void);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Lock the internal EAL Timer Library lock for exclusive access.
*/
-__rte_experimental
void
rte_mcfg_timer_lock(void);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Unlock the internal EAL Timer Library lock for exclusive access.
*/
-__rte_experimental
void
rte_mcfg_timer_unlock(void);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* If true, pages are put in single files (per memseg list),
* as opposed to creating a file per page.
*/
-__rte_experimental
bool
rte_mcfg_get_single_file_segments(void);
diff --git a/lib/eal/version.map b/lib/eal/version.map
index ebafb05e90..ec05d1164b 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -160,6 +160,7 @@ DPDK_22 {
rte_malloc_socket;
rte_malloc_validate;
rte_malloc_virt2iova;
+ rte_mcfg_get_single_file_segments;
rte_mcfg_mem_read_lock;
rte_mcfg_mem_read_unlock;
rte_mcfg_mem_write_lock;
@@ -172,6 +173,8 @@ DPDK_22 {
rte_mcfg_tailq_read_unlock;
rte_mcfg_tailq_write_lock;
rte_mcfg_tailq_write_unlock;
+ rte_mcfg_timer_lock;
+ rte_mcfg_timer_unlock;
rte_mem_alloc_validator_register;
rte_mem_alloc_validator_unregister;
rte_mem_check_dma_mask;
@@ -320,13 +323,8 @@ EXPERIMENTAL {
rte_intr_ack;
rte_lcore_cpuset;
rte_lcore_to_cpu_id;
- rte_mcfg_timer_lock;
- rte_mcfg_timer_unlock;
rte_rand_max; # WINDOWS_NO_EXPORT
- # added in 19.11
- rte_mcfg_get_single_file_segments;
-
# added in 20.02
rte_thread_is_intr;
--
2.25.1
^ permalink raw reply [relevance 3%]
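A minimal sketch of the promoted mcfg calls (the critical-section body is illustrative):

#include <stdbool.h>
#include <rte_eal_memconfig.h>

static void
timer_config_update(void)
{
    rte_mcfg_timer_lock();
    /* exclusive access to the shared EAL timer configuration */
    rte_mcfg_timer_unlock();
}

/* e.g. true when EAL runs with --single-file-segments */
bool single = rte_mcfg_get_single_file_segments();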
* [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable
@ 2021-09-10 12:30 3% Anatoly Burakov
2021-09-10 12:30 2% ` [dpdk-dev] [PATCH v1 2/7] fbarray: promote " Anatoly Burakov
` (6 more replies)
0 siblings, 7 replies; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental APIs to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_eal.h | 24 ------------------------
lib/eal/version.map | 14 ++++++--------
2 files changed, 6 insertions(+), 32 deletions(-)
diff --git a/lib/eal/include/rte_eal.h b/lib/eal/include/rte_eal.h
index eaf6469e50..f1af86bfff 100644
--- a/lib/eal/include/rte_eal.h
+++ b/lib/eal/include/rte_eal.h
@@ -209,9 +209,6 @@ typedef int (*rte_mp_async_reply_t)(const struct rte_mp_msg *request,
const struct rte_mp_reply *reply);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Register an action function for primary/secondary communication.
*
* Call this function to register an action, if the calling component wants
@@ -231,14 +228,10 @@ typedef int (*rte_mp_async_reply_t)(const struct rte_mp_msg *request,
* - 0 on success.
* - (<0) on failure.
*/
-__rte_experimental
int
rte_mp_action_register(const char *name, rte_mp_t action);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Unregister an action function for primary/secondary communication.
*
* Call this function to unregister an action if the calling component does
@@ -252,14 +245,10 @@ rte_mp_action_register(const char *name, rte_mp_t action);
* The name argument plays as the nonredundant key to find the action.
*
*/
-__rte_experimental
void
rte_mp_action_unregister(const char *name);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Send a message to the peer process.
*
* This function will send a message which will be responded by the action
@@ -272,14 +261,10 @@ rte_mp_action_unregister(const char *name);
* - On success, return 0.
* - On failure, return -1, and the reason will be stored in rte_errno.
*/
-__rte_experimental
int
rte_mp_sendmsg(struct rte_mp_msg *msg);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Send a request to the peer process and expect a reply.
*
* This function sends a request message to the peer process, and will
@@ -307,15 +292,11 @@ rte_mp_sendmsg(struct rte_mp_msg *msg);
* - On success, return 0.
* - On failure, return -1, and the reason will be stored in rte_errno.
*/
-__rte_experimental
int
rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
const struct timespec *ts);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Send a request to the peer process and expect a reply in a separate callback.
*
* This function sends a request message to the peer process, and will not
@@ -337,15 +318,11 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
* - On success, return 0.
* - On failure, return -1, and the reason will be stored in rte_errno.
*/
-__rte_experimental
int
rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
rte_mp_async_reply_t clb);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Send a reply to the peer process.
*
* This function will send a reply message in response to a request message
@@ -366,7 +343,6 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
* - On success, return 0.
* - On failure, return -1, and the reason will be stored in rte_errno.
*/
-__rte_experimental
int
rte_mp_reply(struct rte_mp_msg *msg, const char *peer);
diff --git a/lib/eal/version.map b/lib/eal/version.map
index beeb986adc..c6e78d68d1 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -146,6 +146,12 @@ DPDK_22 {
rte_memzone_reserve_aligned;
rte_memzone_reserve_bounded;
rte_memzone_walk;
+ rte_mp_action_register;
+ rte_mp_action_unregister;
+ rte_mp_reply;
+ rte_mp_request_async;
+ rte_mp_request_sync;
+ rte_mp_sendmsg;
rte_openlog_stream;
rte_rand;
rte_realloc;
@@ -224,12 +230,6 @@ DPDK_22 {
EXPERIMENTAL {
global:
- # added in 18.02
- rte_mp_action_register;
- rte_mp_action_unregister;
- rte_mp_reply;
- rte_mp_sendmsg;
-
# added in 18.05
rte_dev_event_callback_register;
rte_dev_event_callback_unregister;
@@ -264,8 +264,6 @@ EXPERIMENTAL {
rte_memseg_contig_walk;
rte_memseg_list_walk;
rte_memseg_walk;
- rte_mp_request_async;
- rte_mp_request_sync;
# added in 18.08
rte_class_find;
--
2.25.1
^ permalink raw reply [relevance 3%]
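To illustrate the promoted IPC calls, a sketch of a request/reply pair (the action name "example_action" and the 5-second timeout are illustrative):

#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <rte_eal.h>
#include <rte_string_fns.h>

static int
handle_request(const struct rte_mp_msg *msg, const void *peer)
{
    struct rte_mp_msg resp;

    memset(&resp, 0, sizeof(resp));
    rte_strlcpy(resp.name, msg->name, sizeof(resp.name)); /* reply keeps the name */
    return rte_mp_reply(&resp, peer);
}

/* primary process: */
rte_mp_action_register("example_action", handle_request);

/* secondary process: */
struct rte_mp_msg req;
struct rte_mp_reply reply;
struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };

memset(&req, 0, sizeof(req));
rte_strlcpy(req.name, "example_action", sizeof(req.name));
if (rte_mp_request_sync(&req, &reply, &ts) == 0)
    free(reply.msgs); /* caller owns and frees the reply array */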
* [dpdk-dev] [PATCH v1 5/7] mem: promote extmem API's to stable
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
` (2 preceding siblings ...)
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 4/7] mem: promote memseg " Anatoly Burakov
@ 2021-09-10 12:30 3% ` Anatoly Burakov
2021-09-10 15:56 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask " Anatoly Burakov
` (2 subsequent siblings)
6 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-09-10 12:30 UTC (permalink / raw)
To: dev, Ray Kinsella
As per ABI policy, move the formerly experimental APIs to the stable
section.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/eal/include/rte_memory.h | 16 ----------------
lib/eal/version.map | 10 ++++------
2 files changed, 4 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/rte_memory.h b/lib/eal/include/rte_memory.h
index 4acb2a72a8..c68b9d5e62 100644
--- a/lib/eal/include/rte_memory.h
+++ b/lib/eal/include/rte_memory.h
@@ -403,9 +403,6 @@ rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
size_t *offset);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Register external memory chunk with DPDK.
*
* @note Using this API is mutually exclusive with ``rte_malloc`` family of
@@ -439,15 +436,11 @@ rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
* EEXIST - memory chunk is already registered
* ENOSPC - no more space in internal config to store a new memory chunk
*/
-__rte_experimental
int
rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
unsigned int n_pages, size_t page_sz);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Unregister external memory chunk with DPDK.
*
* @note Using this API is mutually exclusive with ``rte_malloc`` family of
@@ -470,14 +463,10 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
* EINVAL - one of the parameters was invalid
* ENOENT - memory chunk was not found
*/
-__rte_experimental
int
rte_extmem_unregister(void *va_addr, size_t len);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Attach to external memory chunk registered in another process.
*
* @note Using this API is mutually exclusive with ``rte_malloc`` family of
@@ -497,14 +486,10 @@ rte_extmem_unregister(void *va_addr, size_t len);
* EINVAL - one of the parameters was invalid
* ENOENT - memory chunk was not found
*/
-__rte_experimental
int
rte_extmem_attach(void *va_addr, size_t len);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Detach from external memory chunk registered in another process.
*
* @note Using this API is mutually exclusive with ``rte_malloc`` family of
@@ -524,7 +509,6 @@ rte_extmem_attach(void *va_addr, size_t len);
* EINVAL - one of the parameters was invalid
* ENOENT - memory chunk was not found
*/
-__rte_experimental
int
rte_extmem_detach(void *va_addr, size_t len);
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 359b784c16..420779e1aa 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -70,6 +70,10 @@ DPDK_22 {
rte_epoll_ctl;
rte_epoll_wait;
rte_exit;
+ rte_extmem_attach;
+ rte_extmem_detach;
+ rte_extmem_register;
+ rte_extmem_unregister;
rte_fbarray_attach;
rte_fbarray_destroy;
rte_fbarray_detach;
@@ -307,12 +311,6 @@ EXPERIMENTAL {
rte_mem_check_dma_mask_thread_unsafe;
rte_mem_set_dma_mask;
- # added in 19.02
- rte_extmem_attach;
- rte_extmem_detach;
- rte_extmem_register;
- rte_extmem_unregister;
-
# added in 19.05
rte_dev_dma_map;
rte_dev_dma_unmap;
--
2.25.1
^ permalink raw reply [relevance 3%]
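A sketch of registering an anonymous mapping as external memory with the promoted extmem APIs (sizes are illustrative; passing a NULL IOVA table leaves the page IOVAs unset):

#include <sys/mman.h>
#include <rte_memory.h>

#define EXT_LEN (16u << 20) /* 16 MB, a multiple of the page size */
#define EXT_PGSZ 4096u

static void *
register_ext_mem(void)
{
    void *va = mmap(NULL, EXT_LEN, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (va == MAP_FAILED)
        return NULL;
    if (rte_extmem_register(va, EXT_LEN, NULL,
            EXT_LEN / EXT_PGSZ, EXT_PGSZ) != 0) {
        munmap(va, EXT_LEN); /* rte_errno holds the reason */
        return NULL;
    }
    return va;
}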
* Re: [dpdk-dev] [PATCH v1 1/6] build: increase default of max lcores to 512
2021-09-10 8:06 3% ` David Marchand
@ 2021-09-10 8:24 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-09-10 8:24 UTC (permalink / raw)
To: Bruce Richardson, David Hunt, David Marchand; +Cc: dev
10/09/2021 10:06, David Marchand:
> On Fri, Sep 10, 2021 at 9:54 AM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Fri, Sep 10, 2021 at 08:51:04AM +0200, David Marchand wrote:
> > > On Thu, Sep 9, 2021 at 4:38 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > On Thu, Sep 09, 2021 at 02:45:06PM +0100, David Hunt wrote:
> > > > > Modern processors are coming with an ever-increasing number of cores,
> > > > > and 128 does not seem like a sensible max limit any more, especially
> > > > > when you consider multi-socket systems with Hyper-Threading enabled.
> > > > >
> > > > > This patch increases max_lcores default from 128 to 512.
> > > > >
> > > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > >
> > > Why should we need this?
> > >
> > > --lcores makes it possible to pin 128 lcores to any physical core on
> > > your system.
> > > And for applications that have their own thread management, they can
> > > pin threads, then use rte_thread_register.
> > >
> > > Do you have applications that require more than 128 lcores?
> > >
> > The trouble is that using the --lcores syntax for mapping high core numbers
> > to low lcore ids is much more awkward to use. Every case of DPDK use I've
> > seen uses -c with a coremask, or -l with just giving a few core numbers on
> > it. This simple scheme won't work with core numbers greater than 128, and
> > there are already systems available with more than that number of cores.
> >
> > Apart from the memory footprint issues - which this patch is already making
> > a good start in addressing, why would we not increase the default
> > max_lcores to that seen on real systems?
>
> The memory footprint is a major issue to me, and reserving all those
> lcores won't be needed in any system.
> We will also have to decide on a "640k ought to be enough" value to
> avoid an ABI issue with the next processor that comes out and has more
> than 512 cores.
>
> Could we wire the -c / -l options to --lcores behavior?
> It breaks the 1:1 lcore/physical core assumption, but it solves your
> usability issue.
Why would we change existing options while we already have an option
(--lcores) which solves the issue above?
I think the only issue is to educate users.
Is there something to improve in the documentation?
^ permalink raw reply [relevance 0%]
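For reference, the --lcores mapping discussed above looks like this on the command line (CPU and lcore IDs here are made-up examples): pinning lcore ids 0-3 onto physical CPUs 200-203 is written as

dpdk-testpmd --lcores='0@200,1@201,2@202,3@203' -- -i

which keeps the internal lcore ids small no matter how high the physical core numbers go.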
* Re: [dpdk-dev] [PATCH v1 1/6] build: increase default of max lcores to 512
@ 2021-09-10 8:06 3% ` David Marchand
2021-09-10 8:24 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-09-10 8:06 UTC (permalink / raw)
To: Bruce Richardson; +Cc: David Hunt, dev, Thomas Monjalon
On Fri, Sep 10, 2021 at 9:54 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Fri, Sep 10, 2021 at 08:51:04AM +0200, David Marchand wrote:
> > On Thu, Sep 9, 2021 at 4:38 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Thu, Sep 09, 2021 at 02:45:06PM +0100, David Hunt wrote:
> > > > Modern processors are coming with an ever-increasing number of cores,
> > > > and 128 does not seem like a sensible max limit any more, especially
> > > > when you consider multi-socket systems with Hyper-Threading enabled.
> > > >
> > > > This patch increases max_lcores default from 128 to 512.
> > > >
> > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> >
> > Why should we need this?
> >
> > --lcores makes it possible to pin 128 lcores to any physical core on
> > your system.
> > And for applications that have their own thread management, they can
> > pin threads, then use rte_thread_register.
> >
> > Do you have applications that require more than 128 lcores?
> >
> The trouble is that using the --lcores syntax for mapping high core numbers
> to low lcore ids is much more awkward to use. Every case of DPDK use I've
> seen uses -c with a coremask, or -l with just giving a few core numbers on
> it. This simple scheme won't work with core numbers greater than 128, and
> there are already systems available with more than that number of cores.
>
> Apart from the memory footprint issues - which this patch is already making
> a good start in addressing, why would we not increase the default
> max_lcores to that seen on real systems?
The memory footprint is a major issue to me, and reserving all those
lcores won't be needed in any system.
We will also have to decide on a "640k ought to be enough" value to
avoid an ABI issue with the next processor that comes out and has more
than 512 cores.
Could we wire the -c / -l options to --lcores behavior?
It breaks the 1:1 lcore/physical core assumption, but it solves your
usability issue.
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-09-03 7:14 3% ` David Marchand
@ 2021-09-10 3:47 0% ` Apeksha Gupta
0 siblings, 0 replies; 200+ results
From: Apeksha Gupta @ 2021-09-10 3:47 UTC (permalink / raw)
To: David Marchand
Cc: Andrew Rybchenko, Yigit, Ferruh, dev, Hemant Agrawal, Sachin Saxena
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Friday, September 3, 2021 12:45 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>
> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; dev <dev@dpdk.org>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v2 1/5] net/enetfec: introduce NXP
> ENETFEC driver
>
> Caution: EXT Email
>
> On Thu, Sep 2, 2021 at 8:01 PM Apeksha Gupta <apeksha.gupta@nxp.com>
> wrote:
> >
> > ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> > for NXP SoC i.MX 8M Mini.
> >
> > This patch adds the skeleton of the enetfec driver with a probe function.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> > ---
> > doc/guides/nics/enetfec.rst | 121 ++++++++++++++++++++
> > doc/guides/nics/features/enetfec.ini | 8 ++
> > doc/guides/nics/index.rst | 1 +
> > drivers/net/enetfec/enet_ethdev.c | 95 ++++++++++++++++
> > drivers/net/enetfec/enet_ethdev.h | 160 +++++++++++++++++++++++++++
> > drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
> > drivers/net/enetfec/meson.build | 15 +++
> > drivers/net/enetfec/version.map | 3 +
> > drivers/net/meson.build | 1 +
> > 9 files changed, 435 insertions(+)
> > create mode 100644 doc/guides/nics/enetfec.rst
> > create mode 100644 doc/guides/nics/features/enetfec.ini
> > create mode 100644 drivers/net/enetfec/enet_ethdev.c
> > create mode 100644 drivers/net/enetfec/enet_ethdev.h
> > create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> > create mode 100644 drivers/net/enetfec/meson.build
> > create mode 100644 drivers/net/enetfec/version.map
>
> Please update MAINTAINERS and release notes in this first patch.
[Apeksha] okay, added in v3.
>
> >
> > diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> > new file mode 100644
> > index 0000000000..f151bb26c4
> > --- /dev/null
> > +++ b/doc/guides/nics/enetfec.rst
> > @@ -0,0 +1,121 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > + Copyright 2021 NXP
> > +
> > +ENETFEC Poll Mode Driver
> > +========================
> > +
> > +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> > +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
> > +
> > +More information can be found at NXP Official Website
> > +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> > +
> > +ENETFEC
> > +-------
> > +
> > +This section provides an overview of the NXP ENETFEC and how it is
> > +integrated into the DPDK.
> > +
> > +Contents summary
> > +
> > +- ENETFEC overview
> > +- ENETFEC features
> > +- Supported ENETFEC SoCs
> > +- Prerequisites
> > +- Driver compilation and testing
> > +- Limitations
> > +
> > +ENETFEC Overview
> > +~~~~~~~~~~~~~~~~
> > +The i.MX 8M Mini Media Applications Processor is built to achieve both high
> > +performance and low power consumption. ENETFEC is a hardware programmable
> > +packet forwarding engine to provide high performance Ethernet interface.
> > +The diagram below shows a system level overview of ENETFEC:
> > +
> > +
> ====================================================+============
> ===
> > + US +-----------------------------------------+ | Kernel Space
> > + | | |
> > + | ENETFEC Driver | |
> > + +-----------------------------------------+ |
> > + ^ | |
> > + ENETFEC RXQ | | TXQ |
> > + PMD | | |
> > + | v | +----------+
>
> Misalignment of the right part.
>
>
> > + +-------------+ | | fec-uio |
>
> What is fec-uio?
> I can't find it in upstream linux kernel.
[Apeksha] fec-uio is the kernel driver. We have also initiated the process of upstreaming it.
>
>
> > + | net_enetfec | | +----------+
> > + +-------------+ |
> > + ^ | |
> > + TXQ | | RXQ |
> > + | | |
> > + | v |
> > +
> ===================================================+=============
> ==
> > + +----------------------------------------+
> > + | | HW
> > + | i.MX 8M MINI EVK |
> > + | +-----+ |
> > + | | MAC | |
> > + +---------------+-----+------------------+
> > + | PHY |
> > + +-----+
>
> Misalignment.
>
>
> > +
> > +ENETFEC Ethernet driver is traditional DPDK PMD driver running in the userspace.
> > +The MAC and PHY are the hardware blocks. 'fec-uio' is the UIO driver, ENETFEC PMD
> > +uses UIO interface to interact with kernel for PHY initialisation and for mapping
> > +the allocated memory of register & BD in kernel with DPDK which gives access to
> > +non-cacheable memory for BD. net_enetfec is logical Ethernet interface, created by
> > +ENETFEC driver.
> > +
> > +- ENETFEC driver registers the device in virtual device driver.
> > +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> > +- The probe function will set the basic device registers and also setups BD rings.
> > +- On packet Rx the respective BD Ring status bit is set which is then used for
> > + packet processing.
> > +- Then Tx is done first followed by Rx via logical interfaces.
> > +
> > +ENETFEC Features
> > +~~~~~~~~~~~~~~~~~
> > +
> > +- ARMv8
> > +
> > +Supported ENETFEC SoCs
> > +~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +- i.MX 8M Mini
> > +
> > +Prerequisites
> > +~~~~~~~~~~~~~
> > +
> > +There are three main pre-requisites for executing ENETFEC PMD on a i.MX 8M Mini
> > +compatible board:
> > +
> > +1. **ARM 64 Tool Chain**
> > +
> > + For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> > +
> > +2. **Linux Kernel**
> > +
> > + It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
> > +
> > +3. **Rootfile system**
> > +
> > + Any *aarch64* supporting filesystem can be used. For example,
> > + Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be obtained
> > + from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> > +
> > +4. The Ethernet device will be registered as virtual device, so ENETFEC has dependency on
> > + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
> > + run DPDK application.
> > +
> > +Driver compilation and testing
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Follow instructions available in the document
> > +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> > +to launch **testpmd**
>
> dpdk-testpmd.
>
>
> > +
> > +Limitations
> > +~~~~~~~~~~~
> > +
> > +- Multi queue is not supported.
> > +- Link status is down always.
> > +- Single Ethernet interface.
> > diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> > new file mode 100644
> > index 0000000000..5700697981
> > --- /dev/null
> > +++ b/doc/guides/nics/features/enetfec.ini
> > @@ -0,0 +1,8 @@
> > +;
> > +; Supported features of the 'enetfec' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +ARMv8 = Y
> > +Usage doc = Y
> > diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> > index 784d5d39f6..777fdab4a0 100644
> > --- a/doc/guides/nics/index.rst
> > +++ b/doc/guides/nics/index.rst
> > @@ -26,6 +26,7 @@ Network Interface Controller Drivers
> > e1000em
> > ena
> > enetc
> > + enetfec
> > enic
> > fm10k
> > hinic
> > diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> > new file mode 100644
> > index 0000000000..88774788cf
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_ethdev.c
> > @@ -0,0 +1,95 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <stdio.h>
> > +#include <fcntl.h>
> > +#include <stdlib.h>
> > +#include <unistd.h>
> > +#include <errno.h>
> > +#include <rte_kvargs.h>
> > +#include <ethdev_vdev.h>
> > +#include <rte_bus_vdev.h>
> > +#include <rte_dev.h>
> > +#include <rte_ether.h>
> > +#include "enet_ethdev.h"
> > +#include "enet_pmd_logs.h"
> > +
> > +#define ENETFEC_NAME_PMD net_enetfec
> > +#define ENETFEC_VDEV_GEM_ID_ARG "intf"
> > +#define ENETFEC_CDEV_INVALID_FD -1
> > +
> > +int enetfec_logtype_pmd;
>
> If using RTE_LOG_REGISTER* macros (see below), you don't need this definition.
[Apeksha] okay.
>
>
> > +
> > +static int
> > +enetfec_eth_init(struct rte_eth_dev *dev)
> > +{
> > + rte_eth_dev_probing_finish(dev);
> > + return 0;
> > +}
> > +
> > +static int
> > +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> > +{
> > + struct rte_eth_dev *dev = NULL;
> > + struct enetfec_private *fep;
> > + const char *name;
> > + int rc;
> > +
> > + name = rte_vdev_device_name(vdev);
> > + if (name == NULL)
> > + return -EINVAL;
> > + ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> > +
> > + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> > + if (dev == NULL)
> > + return -ENOMEM;
> > +
> > + /* setup board info structure */
> > + fep = dev->data->dev_private;
> > + fep->dev = dev;
> > + rc = enetfec_eth_init(dev);
> > + if (rc)
> > + goto failed_init;
> > +
> > + return 0;
> > +
> > +failed_init:
> > + ENETFEC_PMD_ERR("Failed to init");
> > + return rc;
> > +}
> > +
> > +static int
> > +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> > +{
> > + struct rte_eth_dev *eth_dev = NULL;
> > + int ret;
> > +
> > + /* find the ethdev entry */
> > + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> > + if (eth_dev == NULL)
> > + return -ENODEV;
> > +
> > + ret = rte_eth_dev_release_port(eth_dev);
> > + if (ret != 0)
> > + return -EINVAL;
> > +
> > + ENETFEC_PMD_INFO("Closing sw device");
> > + return 0;
> > +}
> > +
> > +static struct rte_vdev_driver pmd_enetfec_drv = {
> > + .probe = pmd_enetfec_probe,
> > + .remove = pmd_enetfec_remove,
> > +};
> > +
> > +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> > +RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENETFEC_VDEV_GEM_ID_ARG "=<int>");
> > +
> > +RTE_INIT(enetfec_pmd_init_log)
> > +{
> > + int ret;
> > + ret = rte_log_register_type_and_pick_level(ENETFEC_LOGTYPE_PREFIX "driver",
> > + RTE_LOG_NOTICE);
> > + enetfec_logtype_pmd = (ret < 0) ? RTE_LOGTYPE_PMD : ret;
> > +}
>
> Please use RTE_LOG_REGISTER_DEFAULT.
>
>
> > diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
> > new file mode 100644
> > index 0000000000..8c61176fb5
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_ethdev.h
> > @@ -0,0 +1,160 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef __ENETFEC_ETHDEV_H__
> > +#define __ENETFEC_ETHDEV_H__
> > +
> > +#include <compat.h>
> > +#include <rte_ethdev.h>
> > +
> > +/* Common log type name prefix */
> > +#define ENETFEC_LOGTYPE_PREFIX "pmd.net.enetfec."
> > +
> > +/*
> > + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> > + */
> > +#define ENETFEC_MAX_Q 3
> > +
> > +#define BD_LEN 49152
> > +#define ENETFEC_TX_FR_SIZE 2048
> > +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> > +#define MAX_RX_BD_RING_SIZE 512
> > +
> > +/* full duplex or half duplex */
> > +#define HALF_DUPLEX 0x00
> > +#define FULL_DUPLEX 0x01
> > +#define UNKNOWN_DUPLEX 0xff
> > +
> > +#define PKT_MAX_BUF_SIZE 1984
> > +#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
> > +#define ETH_ALEN RTE_ETHER_ADDR_LEN
> > +#define ETH_HLEN RTE_ETHER_HDR_LEN
> > +#define VLAN_HLEN 4
> > +
> > +struct bufdesc {
> > + uint16_t bd_datlen; /* buffer data length */
> > + uint16_t bd_sc; /* buffer control & status */
> > + uint32_t bd_bufaddr; /* buffer address */
> > +};
> > +
> > +struct bufdesc_ex {
> > + struct bufdesc desc;
> > + uint32_t bd_esc;
> > + uint32_t bd_prot;
> > + uint32_t bd_bdu;
> > + uint32_t ts;
> > + uint16_t res0[4];
> > +};
> > +
> > +struct bufdesc_prop {
> > + int que_id;
> > + /* Addresses of Tx and Rx buffers */
> > + struct bufdesc *base;
> > + struct bufdesc *last;
> > + struct bufdesc *cur;
> > + void __iomem *active_reg_desc;
> > + uint64_t descr_baseaddr_p;
> > + unsigned short ring_size;
> > + unsigned char d_size;
> > + unsigned char d_size_log2;
> > +};
> > +
> > +struct enetfec_priv_tx_q {
> > + struct bufdesc_prop bd;
> > + struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
> > + struct bufdesc *dirty_tx;
> > + struct rte_mempool *pool;
> > + struct enetfec_private *fep;
> > +};
> > +
> > +struct enetfec_priv_rx_q {
> > + struct bufdesc_prop bd;
> > + struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
> > + struct rte_mempool *pool;
> > + struct enetfec_private *fep;
> > +};
> > +
> > +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
> > + * descriptor base is x_bd_base. Currently available buffer are x_cur
> > + * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
> > + * that is sent by the controller.
> > + * The tx_cur and dirty_tx are same in completely full and empty
> > + * conditions. Actual condition is determined by empty & ready bits.
> > + */
> > +struct enetfec_private {
> > + struct rte_eth_dev *dev;
> > + struct rte_eth_stats stats;
> > + struct rte_mempool *pool;
> > + uint16_t max_rx_queues;
> > + uint16_t max_tx_queues;
> > + unsigned int total_tx_ring_size;
> > + unsigned int total_rx_ring_size;
> > + bool bufdesc_ex;
> > + unsigned int tx_align;
> > + unsigned int rx_align;
> > + int full_duplex;
> > + unsigned int phy_speed;
> > + u_int32_t quirks;
> > + int flag_csum;
> > + int flag_pause;
> > + int flag_wol;
> > + bool rgmii_txc_delay;
> > + bool rgmii_rxc_delay;
> > + int link;
> > + void *hw_baseaddr_v;
> > + uint64_t hw_baseaddr_p;
> > + void *bd_addr_v;
> > + uint64_t bd_addr_p;
> > + uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
> > + uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
> > + void *dma_baseaddr_r[ENETFEC_MAX_Q];
> > + void *dma_baseaddr_t[ENETFEC_MAX_Q];
> > + uint64_t cbus_size;
> > + unsigned int reg_size;
> > + unsigned int bd_size;
> > + int hw_ts_rx_en;
> > + int hw_ts_tx_en;
> > + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
> > + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
> > +};
> > +
> > +#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
> > +#define readl(p) rte_read32(p)
> > +
> > +static inline struct
> > +bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> > +{
> > + return (bdp >= bd->last) ? bd->base
> > + : (struct bufdesc *)(((void *)bdp) + bd->d_size);
> > +}
> > +
> > +static inline struct
> > +bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> > +{
> > + return (bdp <= bd->base) ? bd->last
> > + : (struct bufdesc *)(((void *)bdp) - bd->d_size);
> > +}
> > +
> > +static inline int
> > +enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
> > +{
> > + return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> > +}
> > +
> > +static inline int
> > +fls64(unsigned long word)
> > +{
> > + return (64 - __builtin_clzl(word)) - 1;
> > +}
> > +
> > +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> > + uint16_t nb_pkts);
> > +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > + uint16_t nb_pkts);
> > +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> > + struct bufdesc_prop *bd);
> > +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> > + struct rte_mbuf *mbuf);
> > +
> > +#endif /*__ENETFEC_ETHDEV_H__*/
> > diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
> > new file mode 100644
> > index 0000000000..e7b3964a0e
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_pmd_logs.h
> > @@ -0,0 +1,31 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _ENETFEC_LOGS_H_
> > +#define _ENETFEC_LOGS_H_
> > +
> > +extern int enetfec_logtype_pmd;
> > +
> > +/* PMD related logs */
> > +#define ENETFEC_PMD_LOG(level, fmt, args...) \
> > + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
> > + fmt "\n", __func__, ##args)
> > +
> > +#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
> > +
> > +#define ENETFEC_PMD_DEBUG(fmt, args...) \
> > + ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
> > +#define ENETFEC_PMD_ERR(fmt, args...) \
> > + ENETFEC_PMD_LOG(ERR, fmt, ## args)
> > +#define ENETFEC_PMD_INFO(fmt, args...) \
> > + ENETFEC_PMD_LOG(INFO, fmt, ## args)
> > +
> > +#define ENETFEC_PMD_WARN(fmt, args...) \
> > + ENETFEC_PMD_LOG(WARNING, fmt, ## args)
> > +
> > +/* DP Logs, toggled out at compile time if level lower than current level */
> > +#define ENETFEC_DP_LOG(level, fmt, args...) \
> > + RTE_LOG_DP(level, PMD, fmt, ## args)
> > +
> > +#endif /* _ENETFEC_LOGS_H_ */
> > diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> > new file mode 100644
> > index 0000000000..252bf83309
> > --- /dev/null
> > +++ b/drivers/net/enetfec/meson.build
> > @@ -0,0 +1,15 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright 2021 NXP
> > +
> > +if not is_linux
> > + build = false
> > + reason = 'only supported on linux'
>
> The doc specifies that only armv8 is supported.
>
>
> > +endif
> > +
> > +deps += ['common_dpaax']
> > +
> > +sources = files('enet_ethdev.c')
> > +
> > +if cc.has_argument('-Wno-pointer-arith')
> > + cflags += '-Wno-pointer-arith'
> > +endif
>
> Can't this be fixed in the code rather than ignored?
>
>
> > diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
> > new file mode 100644
> > index 0000000000..170c04fe53
> > --- /dev/null
> > +++ b/drivers/net/enetfec/version.map
> > @@ -0,0 +1,3 @@
> > +DPDK_20.0 {
>
> Wrong ABI version.
>
>
> > + local: *;
> > +};
> > diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> > index bcf488f203..92f433d5e8 100644
> > --- a/drivers/net/meson.build
> > +++ b/drivers/net/meson.build
> > @@ -19,6 +19,7 @@ drivers = [
> > 'e1000',
> > 'ena',
> > 'enetc',
> > + 'enetfec',
>
> Indent.
>
>
> > 'enic',
> > 'failsafe',
> > 'fm10k',
> > --
> > 2.17.1
> >
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
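For reference, the RTE_LOG_REGISTER* suggestion above would let the driver drop both the manual enetfec_logtype_pmd definition and the RTE_INIT() block; a sketch of the replacement, assuming the NOTICE default is kept:

/* defines enetfec_logtype_pmd and registers it with a default level */
RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);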
* [dpdk-dev] [PATCH 8/8] bus/pci: remove ABIs in PCI bus
2021-09-10 2:24 3% ` [dpdk-dev] [PATCH 7/8] kni: replace unused variable definition with reserved bytes Chenbo Xia
@ 2021-09-10 2:24 2% ` Chenbo Xia
1 sibling, 0 replies; 200+ results
From: Chenbo Xia @ 2021-09-10 2:24 UTC (permalink / raw)
To: dev
Cc: Nicolas Chautru, Ferruh Yigit, Anatoly Burakov, Ray Kinsella,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Jerin Jacob,
Anoob Joseph, Fiona Trahe, John Griffin, Deepak Kumar Jain,
Andrew Rybchenko, Ashish Gupta, Somalapuram Amaranath,
Ankur Dwivedi, Tejasree Kondoj, Nagadheeraj Rottela,
Srikanth Jampala, Jay Zhou, Timothy McDaniel, Pavan Nikhilesh,
Ashwin Sekhar T K, Harman Kalra, Shepard Siegel, Ed Czeck,
John Miller, Steven Webster, Matt Peters, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Chas Williams,
Min Hu (Connor),
Rahul Lakkireddy, Haiyue Wang, Marcin Wojtas, Michal Krawczyk,
Shai Brandes, Evgeny Schemeilin, Igor Chauskin, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Andrew Boyer, Rosen Xu,
Stephen Hemminger, Long Li, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Yong Wang, Jakub Palider,
Tomasz Duszynski, Tianfei zhang, Bruce Richardson, Xiaoyun Li,
Jingjing Wu, Radha Mohan Chintakuntla, Veerasenareddy Burru,
Ori Kam, Xiao Wang, Thomas Monjalon
As announced in the deprecation note, most of the ABIs in the PCI bus
are removed in this patch. Only the function rte_pci_dump remains part
of the stable ABI; experimental APIs are kept for future promotion.
This patch creates a new file named pci_driver.h and moves most of
the content of the original rte_bus_pci.h to it. After that, pci_driver.h
is considered the interface for drivers, and rte_bus_pci.h the interface
for applications. pci_driver.h is installed via driver_sdk_headers so that
out-of-tree drivers can use it.
This patch then replaces the inclusion of rte_bus_pci.h with pci_driver.h
in all related drivers.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
---
app/test/virtual_pmd.c | 2 +-
doc/guides/rel_notes/release_21_11.rst | 2 +
drivers/baseband/acc100/rte_acc100_pmd.c | 2 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 2 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 2 +-
drivers/bus/pci/bsd/pci.c | 1 -
drivers/bus/pci/linux/pci.c | 1 -
drivers/bus/pci/linux/pci_uio.c | 1 -
drivers/bus/pci/linux/pci_vfio.c | 1 -
drivers/bus/pci/meson.build | 4 +
drivers/bus/pci/pci_common_uio.c | 1 -
drivers/bus/pci/pci_driver.h | 402 ++++++++++++++++++
drivers/bus/pci/pci_params.c | 1 -
drivers/bus/pci/private.h | 3 +-
drivers/bus/pci/rte_bus_pci.h | 375 +---------------
drivers/bus/pci/version.map | 32 +-
drivers/common/cnxk/roc_platform.h | 2 +-
drivers/common/mlx5/linux/mlx5_common_verbs.c | 2 +-
drivers/common/mlx5/mlx5_common_pci.c | 2 +-
drivers/common/octeontx2/otx2_dev.h | 2 +-
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/common/qat/qat_device.h | 2 +-
drivers/common/qat/qat_qp.c | 2 +-
drivers/common/sfc_efx/sfc_efx.h | 2 +-
drivers/compress/mlx5/mlx5_compress.c | 2 +-
drivers/compress/octeontx/otx_zip.h | 2 +-
drivers/compress/qat/qat_comp.c | 2 +-
drivers/crypto/ccp/ccp_dev.h | 2 +-
drivers/crypto/ccp/ccp_pci.h | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev.c | 2 +-
drivers/crypto/mlx5/mlx5_crypto.c | 2 +-
drivers/crypto/nitrox/nitrox_device.h | 2 +-
drivers/crypto/octeontx/otx_cryptodev.c | 2 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev.c | 2 +-
drivers/crypto/qat/qat_sym.c | 2 +-
drivers/crypto/qat/qat_sym_pmd.c | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/crypto/virtio/virtio_pci.h | 2 +-
drivers/event/dlb2/pf/dlb2_main.h | 2 +-
drivers/event/dlb2/pf/dlb2_pf.c | 2 +-
drivers/event/octeontx/ssovf_probe.c | 2 +-
drivers/event/octeontx/timvf_probe.c | 2 +-
drivers/event/octeontx2/otx2_evdev.c | 2 +-
drivers/mempool/cnxk/cnxk_mempool.c | 2 +-
drivers/mempool/octeontx/octeontx_fpavf.c | 2 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/mempool/octeontx2/otx2_mempool.h | 2 +-
drivers/mempool/octeontx2/otx2_mempool_irq.c | 2 +-
drivers/meson.build | 4 +
drivers/net/ark/ark_ethdev.c | 2 +-
drivers/net/avp/avp_ethdev.c | 2 +-
drivers/net/bnx2x/bnx2x.h | 2 +-
drivers/net/bnxt/bnxt.h | 2 +-
drivers/net/bonding/rte_eth_bond_args.c | 2 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 2 +-
drivers/net/e1000/em_ethdev.c | 2 +-
drivers/net/e1000/em_rxtx.c | 2 +-
drivers/net/e1000/igb_ethdev.c | 2 +-
drivers/net/e1000/igb_pf.c | 2 +-
drivers/net/ena/ena_ethdev.h | 2 +-
drivers/net/enic/base/vnic_dev.h | 2 +-
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 2 +-
drivers/net/enic/enic_vf_representor.c | 2 +-
drivers/net/hinic/base/hinic_pmd_hwdev.c | 2 +-
drivers/net/hinic/base/hinic_pmd_hwif.c | 2 +-
drivers/net/hinic/base/hinic_pmd_nicio.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 2 +-
drivers/net/hns3/hns3_ethdev.c | 2 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 2 +-
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
drivers/net/i40e/i40e_vf_representor.c | 2 +-
drivers/net/igc/igc_ethdev.c | 2 +-
drivers/net/ionic/ionic.h | 2 +-
drivers/net/ionic/ionic_ethdev.c | 2 +-
drivers/net/ipn3ke/ipn3ke_ethdev.c | 2 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 2 +-
drivers/net/ipn3ke/ipn3ke_tm.c | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 2 +-
drivers/net/ixgbe/ixgbe_ethdev.h | 2 +-
drivers/net/mlx4/mlx4_ethdev.c | 2 +-
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 2 +-
drivers/net/mlx5/linux/mlx5_os.c | 2 +-
drivers/net/mlx5/mlx5.c | 2 +-
drivers/net/mlx5/mlx5_ethdev.c | 2 +-
drivers/net/mlx5/mlx5_txq.c | 2 +-
drivers/net/netvsc/hn_vf.c | 2 +-
drivers/net/octeontx/base/octeontx_pkivf.c | 2 +-
drivers/net/octeontx/base/octeontx_pkovf.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 2 +-
drivers/net/qede/base/bcm_osal.h | 2 +-
drivers/net/sfc/sfc.h | 2 +-
drivers/net/sfc/sfc_ethdev.c | 2 +-
drivers/net/sfc/sfc_sriov.c | 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 2 +-
drivers/net/txgbe/txgbe_ethdev.h | 2 +-
drivers/net/txgbe/txgbe_flow.c | 2 +-
drivers/net/txgbe/txgbe_pf.c | 2 +-
drivers/net/virtio/virtio_pci.h | 2 +-
drivers/net/virtio/virtio_pci_ethdev.c | 2 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 2 +-
drivers/raw/cnxk_bphy/cnxk_bphy.c | 2 +-
drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c | 2 +-
drivers/raw/cnxk_bphy/cnxk_bphy_irq.c | 2 +-
drivers/raw/cnxk_bphy/cnxk_bphy_irq.h | 2 +-
drivers/raw/ifpga/ifpga_rawdev.c | 2 +-
drivers/raw/ifpga/rte_pmd_ifpga.c | 2 +-
drivers/raw/ioat/idxd_pci.c | 2 +-
drivers/raw/ioat/ioat_rawdev.c | 2 +-
drivers/raw/ntb/ntb.c | 2 +-
drivers/raw/ntb/ntb_hw_intel.c | 2 +-
drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c | 2 +-
drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c | 2 +-
drivers/raw/octeontx2_ep/otx2_ep_rawdev.c | 2 +-
drivers/regex/mlx5/mlx5_regex.c | 2 +-
drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
drivers/vdpa/ifc/base/ifcvf_osdep.h | 2 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 2 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 2 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/eventdev/eventdev_pmd_pci.h | 2 +-
126 files changed, 546 insertions(+), 508 deletions(-)
create mode 100644 drivers/bus/pci/pci_driver.h
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed..555f2969ab 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -6,7 +6,7 @@
#include <rte_ethdev.h>
#include <ethdev_driver.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_malloc.h>
#include <rte_memcpy.h>
#include <rte_memory.h>
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1c9abb74ec..8c24eb3235 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -112,6 +112,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* pci: Removed all ABIs defined in rte_bus_pci.h except the function
+ ``rte_pci_dump()``.
Known Issues
------------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..72734784c9 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -14,7 +14,7 @@
#include <rte_branch_prediction.h>
#include <rte_hexdump.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#ifdef RTE_BBDEV_OFFLOAD_COST
#include <rte_cycles.h>
#endif
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..31cb7e5605 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -11,7 +11,7 @@
#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_byteorder.h>
#ifdef RTE_BBDEV_OFFLOAD_COST
#include <rte_cycles.h>
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..0dc1417dde 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -11,7 +11,7 @@
#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_byteorder.h>
#ifdef RTE_BBDEV_OFFLOAD_COST
#include <rte_cycles.h>
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..b7d3a8df33 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -28,7 +28,6 @@
#include <rte_interrupts.h>
#include <rte_log.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_launch.h>
#include <rte_memory.h>
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..8f91e0233c 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -8,7 +8,6 @@
#include <rte_log.h>
#include <rte_bus.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
#include <rte_malloc.h>
#include <rte_devargs.h>
#include <rte_memcpy.h>
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..de028e4874 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -19,7 +19,6 @@
#include <rte_string_fns.h>
#include <rte_log.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_malloc.h>
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..2ccd0fa101 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -13,7 +13,6 @@
#include <rte_log.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
#include <rte_eal_paging.h>
#include <rte_malloc.h>
#include <rte_vfio.h>
diff --git a/drivers/bus/pci/meson.build b/drivers/bus/pci/meson.build
index 81c7e94c00..33c09e2622 100644
--- a/drivers/bus/pci/meson.build
+++ b/drivers/bus/pci/meson.build
@@ -29,4 +29,8 @@ if is_windows
includes += include_directories('windows')
endif
+driver_sdk_headers += files(
+ 'pci_driver.h',
+)
+
deps += ['kvargs']
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..00ef86c1dd 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -11,7 +11,6 @@
#include <rte_eal.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
#include <rte_tailq.h>
#include <rte_log.h>
#include <rte_malloc.h>
diff --git a/drivers/bus/pci/pci_driver.h b/drivers/bus/pci/pci_driver.h
new file mode 100644
index 0000000000..7a913d54c5
--- /dev/null
+++ b/drivers/bus/pci/pci_driver.h
@@ -0,0 +1,402 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2015 Intel Corporation.
+ * Copyright 2013-2014 6WIND S.A.
+ */
+
+#ifndef _PCI_DRIVER_H_
+#define _PCI_DRIVER_H_
+
+/**
+ * @file
+ * PCI device & driver interface
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <limits.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdint.h>
+#include <inttypes.h>
+
+#include <rte_debug.h>
+#include <rte_interrupts.h>
+#include <rte_dev.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+
+/** Pathname of PCI devices directory. */
+__rte_internal
+const char *rte_pci_get_sysfs_path(void);
+
+/* Forward declarations */
+struct rte_pci_device;
+struct rte_pci_driver;
+
+/** List of PCI devices */
+TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+/** List of PCI drivers */
+TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+
+/* PCI Bus iterators */
+#define FOREACH_DEVICE_ON_PCIBUS(p) \
+ TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+
+#define FOREACH_DRIVER_ON_PCIBUS(p) \
+ TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+
+struct rte_devargs;
+
+enum rte_pci_kernel_driver {
+ RTE_PCI_KDRV_UNKNOWN = 0, /* may be misc UIO or bifurcated driver */
+ RTE_PCI_KDRV_IGB_UIO, /* igb_uio for Linux */
+ RTE_PCI_KDRV_VFIO, /* VFIO for Linux */
+ RTE_PCI_KDRV_UIO_GENERIC, /* uio_pci_generic for Linux */
+ RTE_PCI_KDRV_NIC_UIO, /* nic_uio for FreeBSD */
+ RTE_PCI_KDRV_NONE, /* no attached driver */
+ RTE_PCI_KDRV_NET_UIO, /* NetUIO for Windows */
+};
+
+/**
+ * A structure describing a PCI device.
+ */
+struct rte_pci_device {
+ TAILQ_ENTRY(rte_pci_device) next; /**< Next probed PCI device. */
+ struct rte_device device; /**< Inherit core device */
+ struct rte_pci_addr addr; /**< PCI location. */
+ struct rte_pci_id id; /**< PCI ID. */
+ struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
+ /**< PCI Memory Resource */
+ struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_pci_driver *driver; /**< PCI driver used in probing */
+ uint16_t max_vfs; /**< SR-IOV enabled if non-zero */
+ enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
+ char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
+ struct rte_intr_handle vfio_req_intr_handle;
+ /**< Handler of VFIO request interrupt */
+};
+
+/**
+ * @internal
+ * Helper macro for drivers that need to convert to struct rte_pci_device.
+ */
+#define RTE_DEV_TO_PCI(ptr) container_of(ptr, struct rte_pci_device, device)
+
+#define RTE_DEV_TO_PCI_CONST(ptr) \
+ container_of(ptr, const struct rte_pci_device, device)
+
+#define RTE_ETH_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+#ifdef __cplusplus
+/** C++ macro used to help building up tables of device IDs */
+#define RTE_PCI_DEVICE(vend, dev) \
+ RTE_CLASS_ANY_ID, \
+ (vend), \
+ (dev), \
+ RTE_PCI_ANY_ID, \
+ RTE_PCI_ANY_ID
+#else
+/** Macro used to help building up tables of device IDs */
+#define RTE_PCI_DEVICE(vend, dev) \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = (vend), \
+ .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = RTE_PCI_ANY_ID
+#endif
+
+/**
+ * Initialisation function for the driver called during PCI probing.
+ */
+typedef int (rte_pci_probe_t)(struct rte_pci_driver *, struct rte_pci_device *);
+
+/**
+ * Uninitialisation function for the driver called during hotplugging.
+ */
+typedef int (rte_pci_remove_t)(struct rte_pci_device *);
+
+/**
+ * Driver-specific DMA mapping. After a successful call the device
+ * will be able to read/write from/to this segment.
+ *
+ * @param dev
+ * Pointer to the PCI device.
+ * @param addr
+ * Starting virtual address of memory to be mapped.
+ * @param iova
+ * Starting IOVA address of memory to be mapped.
+ * @param len
+ * Length of memory segment being mapped.
+ * @return
+ * - 0 On success.
+ * - Negative value and rte_errno is set otherwise.
+ */
+typedef int (pci_dma_map_t)(struct rte_pci_device *dev, void *addr,
+ uint64_t iova, size_t len);
+
+/**
+ * Driver-specific DMA un-mapping. After a successful call the device
+ * will not be able to read/write from/to this segment.
+ *
+ * @param dev
+ * Pointer to the PCI device.
+ * @param addr
+ * Starting virtual address of memory to be unmapped.
+ * @param iova
+ * Starting IOVA address of memory to be unmapped.
+ * @param len
+ * Length of memory segment being unmapped.
+ * @return
+ * - 0 On success.
+ * - Negative value and rte_errno is set otherwise.
+ */
+typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
+ uint64_t iova, size_t len);
+
+/**
+ * A structure describing a PCI driver.
+ */
+struct rte_pci_driver {
+ TAILQ_ENTRY(rte_pci_driver) next; /**< Next in list. */
+ struct rte_driver driver; /**< Inherit core driver. */
+ struct rte_pci_bus *bus; /**< PCI bus reference. */
+ rte_pci_probe_t *probe; /**< Device probe function. */
+ rte_pci_remove_t *remove; /**< Device remove function. */
+ pci_dma_map_t *dma_map; /**< device dma map function. */
+ pci_dma_unmap_t *dma_unmap; /**< device dma unmap function. */
+ const struct rte_pci_id *id_table; /**< ID table, NULL terminated. */
+ uint32_t drv_flags; /**< Flags RTE_PCI_DRV_*. */
+};
+
+/**
+ * Structure describing the PCI bus
+ */
+struct rte_pci_bus {
+ struct rte_bus bus; /**< Inherit the generic class */
+ struct rte_pci_device_list device_list; /**< List of PCI devices */
+ struct rte_pci_driver_list driver_list; /**< List of PCI drivers */
+};
+
+/** Device needs PCI BAR mapping (done with either IGB_UIO or VFIO) */
+#define RTE_PCI_DRV_NEED_MAPPING 0x0001
+/** Device needs PCI BAR mapping with enabled write combining (wc) */
+#define RTE_PCI_DRV_WC_ACTIVATE 0x0002
+/** Device already probed can be probed again to check for new ports. */
+#define RTE_PCI_DRV_PROBE_AGAIN 0x0004
+/** Device driver supports link state interrupt */
+#define RTE_PCI_DRV_INTR_LSC 0x0008
+/** Device driver supports device removal interrupt */
+#define RTE_PCI_DRV_INTR_RMV 0x0010
+/** Device driver needs to keep mapped resources if unsupported dev detected */
+#define RTE_PCI_DRV_KEEP_MAPPED_RES 0x0020
+/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
+#define RTE_PCI_DRV_NEED_IOVA_AS_VA 0x0040
+
+/**
+ * Map the PCI device resources in user space virtual memory address
+ *
+ * Note that drivers should not call this function when the
+ * RTE_PCI_DRV_NEED_MAPPING flag is set, as EAL performs the
+ * mapping itself in that case.
+ *
+ * @param dev
+ * A pointer to a rte_pci_device structure describing the device
+ * to use
+ *
+ * @return
+ * 0 on success, negative on error and positive if no driver
+ * is found for the device.
+ */
+__rte_internal
+int rte_pci_map_device(struct rte_pci_device *dev);
+
+/**
+ * Unmap this device
+ *
+ * @param dev
+ * A pointer to a rte_pci_device structure describing the device
+ * to use
+ */
+__rte_internal
+void rte_pci_unmap_device(struct rte_pci_device *dev);
+
+/**
+ * Find device's extended PCI capability.
+ *
+ * @param dev
+ * A pointer to rte_pci_device structure.
+ *
+ * @param cap
+ * Extended capability to be found, which can be any from
+ * RTE_PCI_EXT_CAP_ID_*, defined in librte_pci.
+ *
+ * @return
+ * > 0: The offset of the next matching extended capability structure
+ * within the device's PCI configuration space.
+ * < 0: An error in PCI config space read.
+ * = 0: Device does not support it.
+ */
+__rte_internal
+off_t rte_pci_find_ext_capability(struct rte_pci_device *dev, uint32_t cap);
+
+/**
+ * Enables/Disables Bus Master for device's PCI command register.
+ *
+ * @param dev
+ * A pointer to rte_pci_device structure.
+ * @param enable
+ * Enable or disable Bus Master.
+ *
+ * @return
+ * 0 on success, -1 on error in PCI config space read/write.
+ */
+__rte_internal
+int rte_pci_set_bus_master(struct rte_pci_device *dev, bool enable);
+
+/**
+ * Register a PCI driver.
+ *
+ * @param driver
+ * A pointer to a rte_pci_driver structure describing the driver
+ * to be registered.
+ */
+__rte_internal
+void rte_pci_register(struct rte_pci_driver *driver);
+
+/** Helper for PCI device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_PCI(nm, pci_drv) \
+RTE_INIT(pciinitfn_ ##nm) \
+{\
+ (pci_drv).driver.name = RTE_STR(nm);\
+ rte_pci_register(&pci_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
+/**
+ * Unregister a PCI driver.
+ *
+ * @param driver
+ * A pointer to a rte_pci_driver structure describing the driver
+ * to be unregistered.
+ */
+__rte_internal
+void rte_pci_unregister(struct rte_pci_driver *driver);
+
+/**
+ * Read PCI config space.
+ *
+ * @param device
+ * A pointer to a rte_pci_device structure describing the device
+ * to use
+ * @param buf
+ * A data buffer where the bytes should be read into
+ * @param len
+ * The length of the data buffer.
+ * @param offset
+ * The offset into PCI config space
+ * @return
+ * Number of bytes read on success, negative on error.
+ */
+__rte_internal
+int rte_pci_read_config(const struct rte_pci_device *device,
+ void *buf, size_t len, off_t offset);
+
+/**
+ * Write PCI config space.
+ *
+ * @param device
+ * A pointer to a rte_pci_device structure describing the device
+ * to use
+ * @param buf
+ * A data buffer containing the bytes to be written
+ * @param len
+ * The length of the data buffer.
+ * @param offset
+ * The offset into PCI config space
+ * @return
+ * Number of bytes written on success, negative on error.
+ */
+__rte_internal
+int rte_pci_write_config(const struct rte_pci_device *device,
+ const void *buf, size_t len, off_t offset);
+
+/**
+ * A structure used to access io resources for a pci device.
+ * rte_pci_ioport is arch, os, driver specific, and should not be used outside
+ * of pci ioport api.
+ */
+struct rte_pci_ioport {
+ struct rte_pci_device *dev;
+ uint64_t base;
+ uint64_t len; /* only filled for memory mapped ports */
+};
+
+/**
+ * Initialize a rte_pci_ioport object for a pci device io resource.
+ *
+ * This object is then used to gain access to those io resources (see below).
+ *
+ * @param dev
+ * A pointer to a rte_pci_device structure describing the device
+ * to use.
+ * @param bar
+ * Index of the io pci resource we want to access.
+ * @param p
+ * The rte_pci_ioport object to be initialized.
+ * @return
+ * 0 on success, negative on error.
+ */
+__rte_internal
+int rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
+ struct rte_pci_ioport *p);
+
+/**
+ * Release any resources used in a rte_pci_ioport object.
+ *
+ * @param p
+ * The rte_pci_ioport object to be uninitialized.
+ * @return
+ * 0 on success, negative on error.
+ */
+__rte_internal
+int rte_pci_ioport_unmap(struct rte_pci_ioport *p);
+
+/**
+ * Read from a io pci resource.
+ *
+ * @param p
+ * The rte_pci_ioport object from which we want to read.
+ * @param data
+ * A data buffer where the bytes should be read into
+ * @param len
+ * The length of the data buffer.
+ * @param offset
+ * The offset into the pci io resource.
+ */
+__rte_internal
+void rte_pci_ioport_read(struct rte_pci_ioport *p,
+ void *data, size_t len, off_t offset);
+
+/**
+ * Write to a io pci resource.
+ *
+ * @param p
+ * The rte_pci_ioport object to which we want to write.
+ * @param data
+ * A data buffer containing the bytes to be written
+ * @param len
+ * The length of the data buffer.
+ * @param offset
+ * The offset into the pci io resource.
+ */
+__rte_internal
+void rte_pci_ioport_write(struct rte_pci_ioport *p,
+ const void *data, size_t len, off_t offset);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _PCI_DRIVER_H_ */
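With the full driver-facing interface now in pci_driver.h, a PMD built against
the driver SDK consumes it in place of rte_bus_pci.h. Below is a minimal sketch
of such a PMD skeleton; the net_dummy name, the vendor/device IDs and the probe
logic are illustrative only, not part of this patch:

	#include <errno.h>

	#include <rte_common.h>
	#include <pci_driver.h>

	/* Hypothetical IDs, for illustration only. */
	static const struct rte_pci_id dummy_id_table[] = {
		{ RTE_PCI_DEVICE(0x1234, 0xabcd) },
		{ .vendor_id = 0 }, /* sentinel entry terminating the table */
	};

	static int
	dummy_probe(struct rte_pci_driver *drv __rte_unused,
		    struct rte_pci_device *dev)
	{
		/* RTE_PCI_DRV_NEED_MAPPING below makes EAL map the BARs
		 * before probe runs, so mem_resource[] is ready to use.
		 */
		if (dev->mem_resource[0].addr == NULL)
			return -ENODEV;
		return 0;
	}

	static int
	dummy_remove(struct rte_pci_device *dev __rte_unused)
	{
		return 0;
	}

	static struct rte_pci_driver dummy_pmd = {
		.id_table = dummy_id_table,
		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
		.probe = dummy_probe,
		.remove = dummy_remove,
	};

	RTE_PMD_REGISTER_PCI(net_dummy, dummy_pmd);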
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 691b5ea018..3f4f94cc72 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -3,7 +3,6 @@
*/
#include <rte_bus.h>
-#include <rte_bus_pci.h>
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_kvargs.h>
diff --git a/drivers/bus/pci/private.h b/drivers/bus/pci/private.h
index 0fbef8e1d8..dd42baf516 100644
--- a/drivers/bus/pci/private.h
+++ b/drivers/bus/pci/private.h
@@ -8,10 +8,11 @@
#include <stdbool.h>
#include <stdio.h>
-#include <rte_bus_pci.h>
#include <rte_os_shim.h>
#include <rte_pci.h>
+#include "pci_driver.h"
+
extern struct rte_pci_bus rte_pci_bus;
struct rte_pci_driver;
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 21d9dd4289..2fb35801f8 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -1,225 +1,17 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2015 Intel Corporation.
- * Copyright 2013-2014 6WIND S.A.
+ * Copyright(c) 2021 Intel Corporation.
*/
#ifndef _RTE_BUS_PCI_H_
#define _RTE_BUS_PCI_H_
-/**
- * @file
- * PCI device & driver interface
- */
-
#ifdef __cplusplus
extern "C" {
#endif
-#include <stdio.h>
-#include <stdlib.h>
-#include <limits.h>
-#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
-#include <inttypes.h>
-
-#include <rte_debug.h>
-#include <rte_interrupts.h>
-#include <rte_dev.h>
-#include <rte_bus.h>
-#include <rte_pci.h>
-
-/** Pathname of PCI devices directory. */
-const char *rte_pci_get_sysfs_path(void);
-
-/* Forward declarations */
-struct rte_pci_device;
-struct rte_pci_driver;
-
-/** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
-/** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
-
-/* PCI Bus iterators */
-#define FOREACH_DEVICE_ON_PCIBUS(p) \
- TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
-#define FOREACH_DRIVER_ON_PCIBUS(p) \
- TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
-
-struct rte_devargs;
-
-enum rte_pci_kernel_driver {
- RTE_PCI_KDRV_UNKNOWN = 0, /* may be misc UIO or bifurcated driver */
- RTE_PCI_KDRV_IGB_UIO, /* igb_uio for Linux */
- RTE_PCI_KDRV_VFIO, /* VFIO for Linux */
- RTE_PCI_KDRV_UIO_GENERIC, /* uio_pci_generic for Linux */
- RTE_PCI_KDRV_NIC_UIO, /* nic_uio for FreeBSD */
- RTE_PCI_KDRV_NONE, /* no attached driver */
- RTE_PCI_KDRV_NET_UIO, /* NetUIO for Windows */
-};
-
-/**
- * A structure describing a PCI device.
- */
-struct rte_pci_device {
- TAILQ_ENTRY(rte_pci_device) next; /**< Next probed PCI device. */
- struct rte_device device; /**< Inherit core device */
- struct rte_pci_addr addr; /**< PCI location. */
- struct rte_pci_id id; /**< PCI ID. */
- struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
- /**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
- struct rte_pci_driver *driver; /**< PCI driver used in probing */
- uint16_t max_vfs; /**< sriov enable if not zero */
- enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
- char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
- /**< Handler of VFIO request interrupt */
-};
-
-/**
- * @internal
- * Helper macro for drivers that need to convert to struct rte_pci_device.
- */
-#define RTE_DEV_TO_PCI(ptr) container_of(ptr, struct rte_pci_device, device)
-
-#define RTE_DEV_TO_PCI_CONST(ptr) \
- container_of(ptr, const struct rte_pci_device, device)
-
-#define RTE_ETH_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
-
-#ifdef __cplusplus
-/** C++ macro used to help building up tables of device IDs */
-#define RTE_PCI_DEVICE(vend, dev) \
- RTE_CLASS_ANY_ID, \
- (vend), \
- (dev), \
- RTE_PCI_ANY_ID, \
- RTE_PCI_ANY_ID
-#else
-/** Macro used to help building up tables of device IDs */
-#define RTE_PCI_DEVICE(vend, dev) \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = (vend), \
- .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = RTE_PCI_ANY_ID
-#endif
-
-/**
- * Initialisation function for the driver called during PCI probing.
- */
-typedef int (rte_pci_probe_t)(struct rte_pci_driver *, struct rte_pci_device *);
-
-/**
- * Uninitialisation function for the driver called during hotplugging.
- */
-typedef int (rte_pci_remove_t)(struct rte_pci_device *);
-
-/**
- * Driver-specific DMA mapping. After a successful call the device
- * will be able to read/write from/to this segment.
- *
- * @param dev
- * Pointer to the PCI device.
- * @param addr
- * Starting virtual address of memory to be mapped.
- * @param iova
- * Starting IOVA address of memory to be mapped.
- * @param len
- * Length of memory segment being mapped.
- * @return
- * - 0 On success.
- * - Negative value and rte_errno is set otherwise.
- */
-typedef int (pci_dma_map_t)(struct rte_pci_device *dev, void *addr,
- uint64_t iova, size_t len);
-
-/**
- * Driver-specific DMA un-mapping. After a successful call the device
- * will not be able to read/write from/to this segment.
- *
- * @param dev
- * Pointer to the PCI device.
- * @param addr
- * Starting virtual address of memory to be unmapped.
- * @param iova
- * Starting IOVA address of memory to be unmapped.
- * @param len
- * Length of memory segment being unmapped.
- * @return
- * - 0 On success.
- * - Negative value and rte_errno is set otherwise.
- */
-typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
- uint64_t iova, size_t len);
-
-/**
- * A structure describing a PCI driver.
- */
-struct rte_pci_driver {
- TAILQ_ENTRY(rte_pci_driver) next; /**< Next in list. */
- struct rte_driver driver; /**< Inherit core driver. */
- struct rte_pci_bus *bus; /**< PCI bus reference. */
- rte_pci_probe_t *probe; /**< Device probe function. */
- rte_pci_remove_t *remove; /**< Device remove function. */
- pci_dma_map_t *dma_map; /**< device dma map function. */
- pci_dma_unmap_t *dma_unmap; /**< device dma unmap function. */
- const struct rte_pci_id *id_table; /**< ID table, NULL terminated. */
- uint32_t drv_flags; /**< Flags RTE_PCI_DRV_*. */
-};
-
-/**
- * Structure describing the PCI bus
- */
-struct rte_pci_bus {
- struct rte_bus bus; /**< Inherit the generic class */
- struct rte_pci_device_list device_list; /**< List of PCI devices */
- struct rte_pci_driver_list driver_list; /**< List of PCI drivers */
-};
-
-/** Device needs PCI BAR mapping (done with either IGB_UIO or VFIO) */
-#define RTE_PCI_DRV_NEED_MAPPING 0x0001
-/** Device needs PCI BAR mapping with enabled write combining (wc) */
-#define RTE_PCI_DRV_WC_ACTIVATE 0x0002
-/** Device already probed can be probed again to check for new ports. */
-#define RTE_PCI_DRV_PROBE_AGAIN 0x0004
-/** Device driver supports link state interrupt */
-#define RTE_PCI_DRV_INTR_LSC 0x0008
-/** Device driver supports device removal interrupt */
-#define RTE_PCI_DRV_INTR_RMV 0x0010
-/** Device driver needs to keep mapped resources if unsupported dev detected */
-#define RTE_PCI_DRV_KEEP_MAPPED_RES 0x0020
-/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
-#define RTE_PCI_DRV_NEED_IOVA_AS_VA 0x0040
-
-/**
- * Map the PCI device resources in user space virtual memory address
- *
- * Note that driver should not call this function when flag
- * RTE_PCI_DRV_NEED_MAPPING is set, as EAL will do that for
- * you when it's on.
- *
- * @param dev
- * A pointer to a rte_pci_device structure describing the device
- * to use
- *
- * @return
- * 0 on success, negative on error and positive if no driver
- * is found for the device.
- */
-int rte_pci_map_device(struct rte_pci_device *dev);
-
-/**
- * Unmap this device
- *
- * @param dev
- * A pointer to a rte_pci_device structure describing the device
- * to use
- */
-void rte_pci_unmap_device(struct rte_pci_device *dev);
+#include <rte_compat.h>
/**
* Dump the content of the PCI bus.
@@ -229,169 +21,6 @@ void rte_pci_unmap_device(struct rte_pci_device *dev);
*/
void rte_pci_dump(FILE *f);
-/**
- * Find device's extended PCI capability.
- *
- * @param dev
- * A pointer to rte_pci_device structure.
- *
- * @param cap
- * Extended capability to be found, which can be any from
- * RTE_PCI_EXT_CAP_ID_*, defined in librte_pci.
- *
- * @return
- * > 0: The offset of the next matching extended capability structure
- * within the device's PCI configuration space.
- * < 0: An error in PCI config space read.
- * = 0: Device does not support it.
- */
-__rte_experimental
-off_t rte_pci_find_ext_capability(struct rte_pci_device *dev, uint32_t cap);
-
-/**
- * Enables/Disables Bus Master for device's PCI command register.
- *
- * @param dev
- * A pointer to rte_pci_device structure.
- * @param enable
- * Enable or disable Bus Master.
- *
- * @return
- * 0 on success, -1 on error in PCI config space read/write.
- */
-__rte_experimental
-int rte_pci_set_bus_master(struct rte_pci_device *dev, bool enable);
-
-/**
- * Register a PCI driver.
- *
- * @param driver
- * A pointer to a rte_pci_driver structure describing the driver
- * to be registered.
- */
-void rte_pci_register(struct rte_pci_driver *driver);
-
-/** Helper for PCI device registration from driver (eth, crypto) instance */
-#define RTE_PMD_REGISTER_PCI(nm, pci_drv) \
-RTE_INIT(pciinitfn_ ##nm) \
-{\
- (pci_drv).driver.name = RTE_STR(nm);\
- rte_pci_register(&pci_drv); \
-} \
-RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
-
-/**
- * Unregister a PCI driver.
- *
- * @param driver
- * A pointer to a rte_pci_driver structure describing the driver
- * to be unregistered.
- */
-void rte_pci_unregister(struct rte_pci_driver *driver);
-
-/**
- * Read PCI config space.
- *
- * @param device
- * A pointer to a rte_pci_device structure describing the device
- * to use
- * @param buf
- * A data buffer where the bytes should be read into
- * @param len
- * The length of the data buffer.
- * @param offset
- * The offset into PCI config space
- * @return
- * Number of bytes read on success, negative on error.
- */
-int rte_pci_read_config(const struct rte_pci_device *device,
- void *buf, size_t len, off_t offset);
-
-/**
- * Write PCI config space.
- *
- * @param device
- * A pointer to a rte_pci_device structure describing the device
- * to use
- * @param buf
- * A data buffer containing the bytes should be written
- * @param len
- * The length of the data buffer.
- * @param offset
- * The offset into PCI config space
- */
-int rte_pci_write_config(const struct rte_pci_device *device,
- const void *buf, size_t len, off_t offset);
-
-/**
- * A structure used to access io resources for a pci device.
- * rte_pci_ioport is arch, os, driver specific, and should not be used outside
- * of pci ioport api.
- */
-struct rte_pci_ioport {
- struct rte_pci_device *dev;
- uint64_t base;
- uint64_t len; /* only filled for memory mapped ports */
-};
-
-/**
- * Initialize a rte_pci_ioport object for a pci device io resource.
- *
- * This object is then used to gain access to those io resources (see below).
- *
- * @param dev
- * A pointer to a rte_pci_device structure describing the device
- * to use.
- * @param bar
- * Index of the io pci resource we want to access.
- * @param p
- * The rte_pci_ioport object to be initialized.
- * @return
- * 0 on success, negative on error.
- */
-int rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
- struct rte_pci_ioport *p);
-
-/**
- * Release any resources used in a rte_pci_ioport object.
- *
- * @param p
- * The rte_pci_ioport object to be uninitialized.
- * @return
- * 0 on success, negative on error.
- */
-int rte_pci_ioport_unmap(struct rte_pci_ioport *p);
-
-/**
- * Read from a io pci resource.
- *
- * @param p
- * The rte_pci_ioport object from which we want to read.
- * @param data
- * A data buffer where the bytes should be read into
- * @param len
- * The length of the data buffer.
- * @param offset
- * The offset into the pci io resource.
- */
-void rte_pci_ioport_read(struct rte_pci_ioport *p,
- void *data, size_t len, off_t offset);
-
-/**
- * Write to a io pci resource.
- *
- * @param p
- * The rte_pci_ioport object to which we want to write.
- * @param data
- * A data buffer where the bytes should be read into
- * @param len
- * The length of the data buffer.
- * @param offset
- * The offset into the pci io resource.
- */
-void rte_pci_ioport_write(struct rte_pci_ioport *p,
- const void *data, size_t len, off_t offset);
-
/**
* Read 4 bytes from PCI memory resource.
*
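After this split, rte_bus_pci.h is reduced to what an application may
legitimately use: rte_pci_dump() plus the experimental 4-byte memory accessors.
A quick sketch of application code that keeps building unchanged (a standard
EAL init sequence is assumed):

	#include <stdio.h>

	#include <rte_eal.h>
	#include <rte_bus_pci.h>

	int
	main(int argc, char **argv)
	{
		/* PCI devices are enumerated during EAL init. */
		if (rte_eal_init(argc, argv) < 0)
			return 1;

		/* The only stable symbol left in this header. */
		rte_pci_dump(stdout);

		rte_eal_cleanup();
		return 0;
	}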
diff --git a/drivers/bus/pci/version.map b/drivers/bus/pci/version.map
index 01ec836559..73cf881778 100644
--- a/drivers/bus/pci/version.map
+++ b/drivers/bus/pci/version.map
@@ -2,6 +2,22 @@ DPDK_22 {
global:
rte_pci_dump;
+
+ local: *;
+};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_pci_mem_rd32;
+ rte_pci_mem_wr32;
+};
+
+INTERNAL {
+ global:
+
+ rte_pci_find_ext_capability;
rte_pci_get_sysfs_path;
rte_pci_ioport_map;
rte_pci_ioport_read;
@@ -10,22 +26,8 @@ DPDK_22 {
rte_pci_map_device;
rte_pci_read_config;
rte_pci_register;
+ rte_pci_set_bus_master;
rte_pci_unmap_device;
rte_pci_unregister;
rte_pci_write_config;
-
- local: *;
-};
-
-EXPERIMENTAL {
- global:
-
- rte_pci_find_ext_capability;
-
- # added in 21.08
- rte_pci_set_bus_master;
-
- # added in 21.11
- rte_pci_mem_rd32;
- rte_pci_mem_wr32;
};
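Moving these symbols to the INTERNAL section pairs with the __rte_internal tags
added in pci_driver.h: assuming the usual mechanism from rte_compat.h, code
compiled without ALLOW_INTERNAL_API hits a build error when it reaches for one
of these functions, while in-tree drivers get the define from the build system.
Sketch:

	/* Without ALLOW_INTERNAL_API defined (e.g. an out-of-tree
	 * application), calling rte_pci_register() now fails at compile
	 * time with an error along the lines of
	 * "Symbol is not public ABI". Driver builds effectively do:
	 */
	#define ALLOW_INTERNAL_API 1
	#include <pci_driver.h>

	/* ...here rte_pci_register(), rte_pci_read_config() and the rest
	 * resolve normally, exactly as before the move.
	 */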
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b82d..fa8a6ef0af 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -7,7 +7,7 @@
#include <rte_alarm.h>
#include <rte_bitmap.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_byteorder.h>
#include <rte_common.h>
#include <rte_cycles.h>
diff --git a/drivers/common/mlx5/linux/mlx5_common_verbs.c b/drivers/common/mlx5/linux/mlx5_common_verbs.c
index 9080bd3e87..65d795643b 100644
--- a/drivers/common/mlx5/linux/mlx5_common_verbs.c
+++ b/drivers/common/mlx5/linux/mlx5_common_verbs.c
@@ -11,7 +11,7 @@
#include <inttypes.h>
#include <rte_errno.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_bus_auxiliary.h>
#include "mlx5_common_utils.h"
diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c
index 8b38091d87..eaa9a0fff5 100644
--- a/drivers/common/mlx5/mlx5_common_pci.c
+++ b/drivers/common/mlx5/mlx5_common_pci.c
@@ -9,7 +9,7 @@
#include <rte_errno.h>
#include <rte_class.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "mlx5_common_log.h"
#include "mlx5_common_private.h"
diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
index d5b2b0d9af..1f36f24cb0 100644
--- a/drivers/common/octeontx2/otx2_dev.h
+++ b/drivers/common/octeontx2/otx2_dev.h
@@ -5,7 +5,7 @@
#ifndef _OTX2_DEV_H
#define _OTX2_DEV_H
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "otx2_common.h"
#include "otx2_irq.h"
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..a73c27c970 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -3,7 +3,7 @@
*/
#include <rte_atomic.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ethdev.h>
#include <rte_spinlock.h>
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..8d18365bae 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -4,7 +4,7 @@
#ifndef _QAT_DEVICE_H_
#define _QAT_DEVICE_H_
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "qat_common.h"
#include "qat_logs.h"
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..a964231d2c 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -8,7 +8,7 @@
#include <rte_malloc.h>
#include <rte_memzone.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_atomic.h>
#include <rte_prefetch.h>
diff --git a/drivers/common/sfc_efx/sfc_efx.h b/drivers/common/sfc_efx/sfc_efx.h
index c16eca60f3..9829ed0e96 100644
--- a/drivers/common/sfc_efx/sfc_efx.h
+++ b/drivers/common/sfc_efx/sfc_efx.h
@@ -10,7 +10,7 @@
#ifndef _SFC_EFX_H_
#define _SFC_EFX_H_
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "efx.h"
#include "efsys.h"
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 883e720ec1..e8112935c3 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -5,7 +5,7 @@
#include <rte_malloc.h>
#include <rte_log.h>
#include <rte_errno.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_spinlock.h>
#include <rte_comp.h>
#include <rte_compressdev.h>
diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h
index e43f7f5c3e..56136ce734 100644
--- a/drivers/compress/octeontx/otx_zip.h
+++ b/drivers/compress/octeontx/otx_zip.h
@@ -7,7 +7,7 @@
#include <unistd.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_comp.h>
#include <rte_compressdev.h>
#include <rte_compressdev_pmd.h>
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..6fa82ad9a8 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -6,7 +6,7 @@
#include <rte_mbuf.h>
#include <rte_hexdump.h>
#include <rte_comp.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_byteorder.h>
#include <rte_memcpy.h>
#include <rte_common.h>
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index ca5145c278..eb8cd94716 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -10,7 +10,7 @@
#include <stdint.h>
#include <string.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_atomic.h>
#include <rte_byteorder.h>
#include <rte_io.h>
diff --git a/drivers/crypto/ccp/ccp_pci.h b/drivers/crypto/ccp/ccp_pci.h
index 7ed3bac406..c8ba9f8789 100644
--- a/drivers/crypto/ccp/ccp_pci.h
+++ b/drivers/crypto/ccp/ccp_pci.h
@@ -7,7 +7,7 @@
#include <stdint.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#define SYSFS_PCI_DEVICES "/sys/bus/pci/devices"
#define PROC_MODULES "/proc/modules"
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index ab9416942e..8f5351511e 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -3,7 +3,7 @@
*/
#include <rte_string_fns.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_bus_vdev.h>
#include <rte_common.h>
#include <rte_cryptodev.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index db7b5aa7c6..ed96f98e51 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -2,7 +2,7 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev.c b/drivers/crypto/cnxk/cn9k_cryptodev.c
index 9ff2383d98..017febefa9 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev.c
@@ -2,7 +2,7 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index b3d5200ca3..6518ed0c19 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -6,7 +6,7 @@
#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_log.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_memory.h>
#include <mlx5_glue.h>
diff --git a/drivers/crypto/nitrox/nitrox_device.h b/drivers/crypto/nitrox/nitrox_device.h
index 6b8095f42b..91b15d9913 100644
--- a/drivers/crypto/nitrox/nitrox_device.h
+++ b/drivers/crypto/nitrox/nitrox_device.h
@@ -5,7 +5,7 @@
#ifndef _NITROX_DEVICE_H_
#define _NITROX_DEVICE_H_
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_cryptodev.h>
struct nitrox_sym_device;
diff --git a/drivers/crypto/octeontx/otx_cryptodev.c b/drivers/crypto/octeontx/otx_cryptodev.c
index 3822c0d779..0b86757fc1 100644
--- a/drivers/crypto/octeontx/otx_cryptodev.c
+++ b/drivers/crypto/octeontx/otx_cryptodev.c
@@ -2,7 +2,7 @@
* Copyright(c) 2018 Cavium, Inc
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_cryptodev.h>
#include <rte_cryptodev_pmd.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index eac6796cfb..212388da96 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_alarm.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_cryptodev.h>
#include <rte_cryptodev_pmd.h>
#include <rte_eventdev.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
index 75fb4f9a3b..d0451e25b4 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev.c
@@ -2,7 +2,7 @@
* Copyright (C) 2019 Marvell International Ltd.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 93b257522b..574e3d577d 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -7,7 +7,7 @@
#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_crypto_sym.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_byteorder.h>
#include "qat_sym.h"
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 6868e5f001..76c58346d1 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -2,7 +2,7 @@
* Copyright(c) 2015-2018 Intel Corporation
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_dev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4bae74a487..706fa59629 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_errno.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_cryptodev.h>
#include <rte_cryptodev_pmd.h>
#include <rte_eal.h>
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 0a7ea1bb64..889c263555 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -9,7 +9,7 @@
#include <rte_eal_paging.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_cryptodev.h>
#include "virtio_crypto.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 9eeda482a3..40178e0dda 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -9,7 +9,7 @@
#include <rte_log.h>
#include <rte_spinlock.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_eal_paging.h>
#include "base/dlb2_hw_types.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index e9da89d650..8f8c742286 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -25,7 +25,7 @@
#include <rte_cycles.h>
#include <rte_io.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_eventdev.h>
#include <eventdev_pmd.h>
#include <eventdev_pmd_pci.h>
diff --git a/drivers/event/octeontx/ssovf_probe.c b/drivers/event/octeontx/ssovf_probe.c
index 4da7d1ae45..f45f005e33 100644
--- a/drivers/event/octeontx/ssovf_probe.c
+++ b/drivers/event/octeontx/ssovf_probe.c
@@ -7,7 +7,7 @@
#include <rte_eal.h>
#include <rte_io.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "octeontx_mbox.h"
#include "ssovf_evdev.h"
diff --git a/drivers/event/octeontx/timvf_probe.c b/drivers/event/octeontx/timvf_probe.c
index 59bba31e8e..5a75494b12 100644
--- a/drivers/event/octeontx/timvf_probe.c
+++ b/drivers/event/octeontx/timvf_probe.c
@@ -5,7 +5,7 @@
#include <rte_eal.h>
#include <rte_io.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <octeontx_mbox.h>
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 38a6b651d9..5db1880455 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -4,7 +4,7 @@
#include <inttypes.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <eventdev_pmd_pci.h>
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index dd4d74ca05..175bad355f 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -3,7 +3,7 @@
*/
#include <rte_atomic.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_devargs.h>
#include <rte_eal.h>
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 94dc5cd815..a7a09f7872 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -13,7 +13,7 @@
#include <rte_atomic.h>
#include <rte_eal.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_errno.h>
#include <rte_memory.h>
#include <rte_malloc.h>
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..6c3969fcfb 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -3,7 +3,7 @@
*/
#include <rte_atomic.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_io.h>
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
index 8aa548248d..b2ed9a56e8 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ b/drivers/mempool/octeontx2/otx2_mempool.h
@@ -6,7 +6,7 @@
#define __OTX2_MEMPOOL_H__
#include <rte_bitmap.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_devargs.h>
#include <rte_mempool.h>
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
index 5fa22b9612..c7c9b0d35d 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_irq.c
@@ -5,7 +5,7 @@
#include <inttypes.h>
#include <rte_common.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "otx2_common.h"
#include "otx2_irq.h"
diff --git a/drivers/meson.build b/drivers/meson.build
index d9e331ec85..44fe109892 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -85,6 +85,7 @@ foreach subpath:subdirs
name = drv
sources = []
headers = []
+ driver_sdk_headers = [] # public headers used to build drivers out of tree
objs = []
cflags = default_cflags
includes = [include_directories(drv_path)]
@@ -150,6 +151,9 @@ foreach subpath:subdirs
dpdk_extra_ldflags += pkgconfig_extra_libs
install_headers(headers)
+ if get_option('enable_driver_sdk')
+ install_headers(driver_sdk_headers)
+ endif
# generate pmdinfo sources by building a temporary
# lib and then running pmdinfogen on the contents of
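For any other driver that wants to export such headers, the pattern mirrors the
bus/pci change above; the header name below is hypothetical. Note the files are
only installed when the build is configured with -Denable_driver_sdk=true:

	# In the driver's meson.build:
	driver_sdk_headers += files(
	        'my_driver_api.h', # hypothetical header exposed to other drivers
	)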
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c..4c564ce21e 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -6,7 +6,7 @@
#include <sys/stat.h>
#include <dlfcn.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_pci.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..87e29519e6 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -16,7 +16,7 @@
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ether.h>
#include <rte_common.h>
#include <rte_cycles.h>
diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h
index 80d19cbfd6..425acaa5ac 100644
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -16,7 +16,7 @@
#include <rte_byteorder.h>
#include <rte_spinlock.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_io.h>
#include "bnx2x_osal.h"
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 494a1eff37..6a456e20e2 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -11,7 +11,7 @@
#include <sys/queue.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_driver.h>
#include <rte_memory.h>
#include <rte_lcore.h>
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 5406e1c934..24ff649ca7 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -4,7 +4,7 @@
#include <rte_devargs.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_kvargs.h>
#include "rte_eth_bond.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..80809d7cd6 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -8,7 +8,7 @@
#ifndef __T4_ADAPTER_H__
#define __T4_ADAPTER_H__
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_mbuf.h>
#include <rte_io.h>
#include <rte_rwlock.h>
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca3976..fb03d69d66 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -20,7 +20,7 @@
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_branch_prediction.h>
#include <rte_memory.h>
#include <rte_tailq.h>
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..657de61aa4 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -13,7 +13,7 @@
#include <rte_byteorder.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
#include <ethdev_pci.h>
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..c75be0d1e5 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -18,7 +18,7 @@
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_memory.h>
#include <rte_memcpy.h>
#include <rte_memzone.h>
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f3341..94181da21a 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -15,7 +15,7 @@
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
#include <ethdev_pci.h>
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9..294fb75dc3 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -10,7 +10,7 @@
#include <stdarg.h>
#include <inttypes.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_interrupts.h>
#include <rte_log.h>
#include <rte_debug.h>
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5..a3d76383e0 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -12,7 +12,7 @@
#include <ethdev_pci.h>
#include <rte_cycles.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_timer.h>
#include <rte_dev.h>
#include <rte_net.h>
diff --git a/drivers/net/enic/base/vnic_dev.h b/drivers/net/enic/base/vnic_dev.h
index 4b9f75b65f..db9b2af3cd 100644
--- a/drivers/net/enic/base/vnic_dev.h
+++ b/drivers/net/enic/base/vnic_dev.h
@@ -9,7 +9,7 @@
#include <stdbool.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "enic_compat.h"
#include "vnic_resource.h"
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..1202fcd4f9 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -8,7 +8,7 @@
#include <rte_dev.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_driver.h>
#include <ethdev_pci.h>
#include <rte_geneve.h>
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..f5c1131839 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -10,7 +10,7 @@
#include <fcntl.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_memzone.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 79dd6e5640..f44f6c013e 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -5,7 +5,7 @@
#include <stdint.h>
#include <stdio.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_dev.h>
#include <ethdev_driver.h>
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa2..49a6b523f5 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -3,7 +3,7 @@
*/
#include<ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_hash.h>
#include <rte_jhash.h>
diff --git a/drivers/net/hinic/base/hinic_pmd_hwif.c b/drivers/net/hinic/base/hinic_pmd_hwif.c
index 26fa1e27d4..9b1b41ca95 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwif.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwif.c
@@ -2,7 +2,7 @@
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "hinic_compat.h"
#include "hinic_csr.h"
diff --git a/drivers/net/hinic/base/hinic_pmd_nicio.c b/drivers/net/hinic/base/hinic_pmd_nicio.c
index ad5db9f1de..234b433c90 100644
--- a/drivers/net/hinic/base/hinic_pmd_nicio.c
+++ b/drivers/net/hinic/base/hinic_pmd_nicio.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*/
-#include<rte_bus_pci.h>
+#include <pci_driver.h>
#include "hinic_compat.h"
#include "hinic_pmd_hwdev.h"
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a72401546..d36dad0579 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -3,7 +3,7 @@
*/
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_pci.h>
#include <rte_mbuf.h>
#include <rte_malloc.h>
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972..d5d5bc8e0f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -3,7 +3,7 @@
*/
#include <rte_alarm.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_pci.h>
#include <rte_pci.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0f222b37f9..54cd15d40d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2,7 +2,7 @@
* Copyright(c) 2018-2021 HiSilicon Limited.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_cycles.h>
#include <rte_geneve.h>
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..51fface85e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -15,7 +15,7 @@
#include <rte_eal.h>
#include <rte_string_fns.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
#include <ethdev_pci.h>
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b2..356ca6de0c 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -18,7 +18,7 @@
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_memory.h>
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..650d8ad502 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -2,7 +2,7 @@
* Copyright(c) 2018 Intel Corporation.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ethdev.h>
#include <rte_pci.h>
#include <rte_malloc.h>
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..3d3d0d6f3c 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -7,7 +7,7 @@
#include <rte_string_fns.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_driver.h>
#include <ethdev_pci.h>
#include <rte_malloc.h>
diff --git a/drivers/net/ionic/ionic.h b/drivers/net/ionic/ionic.h
index 49b90d1b7c..e946d92c29 100644
--- a/drivers/net/ionic/ionic.h
+++ b/drivers/net/ionic/ionic.h
@@ -8,7 +8,7 @@
#include <stdint.h>
#include <inttypes.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "ionic_dev.h"
#include "ionic_if.h"
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e620793966..e8bcccccb4 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -3,7 +3,7 @@
*/
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ethdev.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c
index 964506c6db..2493df2bcd 100644
--- a/drivers/net/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c
@@ -4,7 +4,7 @@
#include <stdint.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ethdev.h>
#include <rte_pci.h>
#include <rte_malloc.h>
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa587..f288bf09d8 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -5,7 +5,7 @@
#include <stdint.h>
#include <unistd.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ethdev.h>
#include <rte_pci.h>
#include <rte_malloc.h>
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 6a9b98fd7f..532d232dbf 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -6,7 +6,7 @@
#include <stdlib.h>
#include <string.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_ethdev.h>
#include <rte_pci.h>
#include <rte_malloc.h>
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b5..f873757a7b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -20,7 +20,7 @@
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_branch_prediction.h>
#include <rte_memory.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..7b0c1ad542 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -19,7 +19,7 @@
#include <rte_time.h>
#include <rte_hash.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_tm_driver.h>
/* need update link, bit flag */
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce..b86f65e32a 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -32,7 +32,7 @@
#pragma GCC diagnostic error "-Wpedantic"
#endif
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_errno.h>
#include <ethdev_driver.h>
#include <rte_ether.h>
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c6..2e0a40f0cc 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -25,7 +25,7 @@
#include <time.h>
#include <ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_mbuf.h>
#include <rte_common.h>
#include <rte_interrupts.h>
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..ba184470e1 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -19,7 +19,7 @@
#include <ethdev_driver.h>
#include <ethdev_pci.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_bus_auxiliary.h>
#include <rte_common.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe7..bbcdec0f49 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -13,7 +13,7 @@
#include <rte_malloc.h>
#include <ethdev_driver.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_kvargs.h>
#include <rte_rwlock.h>
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d98..8888702fc8 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -11,7 +11,7 @@
#include <errno.h>
#include <ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_mbuf.h>
#include <rte_common.h>
#include <rte_interrupts.h>
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index eb4d34ca55..4d8c26f7bc 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -13,7 +13,7 @@
#include <rte_mbuf.h>
#include <rte_malloc.h>
#include <ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_eal_paging.h>
diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c
index 75192e6319..1da4eacb6f 100644
--- a/drivers/net/netvsc/hn_vf.c
+++ b/drivers/net/netvsc/hn_vf.c
@@ -21,7 +21,7 @@
#include <rte_memory.h>
#include <rte_bus_vmbus.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_log.h>
#include <rte_string_fns.h>
#include <rte_alarm.h>
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.c b/drivers/net/octeontx/base/octeontx_pkivf.c
index 0ddff54886..9ff5b78fd0 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.c
+++ b/drivers/net/octeontx/base/octeontx_pkivf.c
@@ -5,7 +5,7 @@
#include <string.h>
#include <rte_eal.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "../octeontx_logs.h"
#include "octeontx_io.h"
diff --git a/drivers/net/octeontx/base/octeontx_pkovf.c b/drivers/net/octeontx/base/octeontx_pkovf.c
index bf28bc7992..a125acadce 100644
--- a/drivers/net/octeontx/base/octeontx_pkovf.c
+++ b/drivers/net/octeontx/base/octeontx_pkovf.c
@@ -10,7 +10,7 @@
#include <rte_cycles.h>
#include <rte_malloc.h>
#include <rte_memory.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_spinlock.h>
#include "../octeontx_logs.h"
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..40541c9280 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -4,7 +4,7 @@
#include <inttypes.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_malloc.h>
#include "otx2_ethdev.h"
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index c5b5399282..10ec1e2fb7 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -21,7 +21,7 @@
#include <rte_ether.h>
#include <rte_io.h>
#include <rte_version.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
/* Forward declaration */
struct ecore_dev;
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 331e06bac6..f57da8fc96 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -13,7 +13,7 @@
#include <stdbool.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_driver.h>
#include <rte_kvargs.h>
#include <rte_spinlock.h>
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..7f8b052845 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -11,7 +11,7 @@
#include <ethdev_driver.h>
#include <ethdev_pci.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
#include <rte_ether.h>
diff --git a/drivers/net/sfc/sfc_sriov.c b/drivers/net/sfc/sfc_sriov.c
index baa0242433..48c64c8311 100644
--- a/drivers/net/sfc/sfc_sriov.c
+++ b/drivers/net/sfc/sfc_sriov.c
@@ -8,7 +8,7 @@
*/
#include <rte_common.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "sfc.h"
#include "sfc_log.h"
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfc..cf87a3861f 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -32,7 +32,7 @@
#include <rte_malloc.h>
#include <rte_random.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_tailq.h>
#include <rte_devargs.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..faf0c58bd5 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -20,7 +20,7 @@
#include <rte_ethdev_core.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_tm_driver.h>
/* need update link, bit flag */
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b141..f70a9309d9 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -4,7 +4,7 @@
*/
#include <sys/queue.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_malloc.h>
#include <rte_flow.h>
#include <rte_flow_driver.h>
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 494d779a3c..58159bc226 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -20,7 +20,7 @@
#include <rte_memcpy.h>
#include <rte_malloc.h>
#include <rte_random.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "base/txgbe.h"
#include "txgbe_ethdev.h"
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 11e25a0142..a9016534aa 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -9,7 +9,7 @@
#include <stdbool.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <ethdev_driver.h>
#include "virtio.h"
diff --git a/drivers/net/virtio/virtio_pci_ethdev.c b/drivers/net/virtio/virtio_pci_ethdev.c
index 4083853c48..13dbc0c419 100644
--- a/drivers/net/virtio/virtio_pci_ethdev.c
+++ b/drivers/net/virtio/virtio_pci_ethdev.c
@@ -11,7 +11,7 @@
#include <ethdev_driver.h>
#include <ethdev_pci.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_errno.h>
#include <rte_memory.h>
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a..ccbab9364e 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -19,7 +19,7 @@
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_branch_prediction.h>
#include <rte_memory.h>
#include <rte_memzone.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy.c b/drivers/raw/cnxk_bphy/cnxk_bphy.c
index 9cb3f8d332..61c60ed363 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy.c
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2021 Marvell.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_dev.h>
#include <rte_eal.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c b/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c
index ade45ab741..f9d7353a18 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c
@@ -3,7 +3,7 @@
*/
#include <string.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_rawdev.h>
#include <rte_rawdev_pmd.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c
index bbcc285a7a..6014992934 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2021 Marvell.
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_pci.h>
#include <rte_rawdev.h>
#include <rte_rawdev_pmd.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h
index b55147b93e..7a623b48c8 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h
@@ -5,7 +5,7 @@
#ifndef _CNXK_BPHY_IRQ_
#define _CNXK_BPHY_IRQ_
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_dev.h>
#include <roc_api.h>
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..eb0c6e271f 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -16,7 +16,7 @@
#include <rte_devargs.h>
#include <rte_memcpy.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_kvargs.h>
#include <rte_alarm.h>
#include <rte_interrupts.h>
diff --git a/drivers/raw/ifpga/rte_pmd_ifpga.c b/drivers/raw/ifpga/rte_pmd_ifpga.c
index 23146432c2..959aad414d 100644
--- a/drivers/raw/ifpga/rte_pmd_ifpga.c
+++ b/drivers/raw/ifpga/rte_pmd_ifpga.c
@@ -3,7 +3,7 @@
*/
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_rawdev.h>
#include <rte_rawdev_pmd.h>
#include "rte_pmd_ifpga.h"
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 13515dbc6c..257e4c004a 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_memzone.h>
#include <rte_devargs.h>
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 5396671d4f..f1d4220b69 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -3,7 +3,7 @@
*/
#include <rte_cycles.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_memzone.h>
#include <rte_string_fns.h>
#include <rte_rawdev_pmd.h>
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..a74a42f8d2 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -13,7 +13,7 @@
#include <rte_log.h>
#include <rte_pci.h>
#include <rte_mbuf.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_memzone.h>
#include <rte_memcpy.h>
#include <rte_rawdev.h>
diff --git a/drivers/raw/ntb/ntb_hw_intel.c b/drivers/raw/ntb/ntb_hw_intel.c
index a742e8fbb9..072c2dd7b9 100644
--- a/drivers/raw/ntb/ntb_hw_intel.c
+++ b/drivers/raw/ntb/ntb_hw_intel.c
@@ -8,7 +8,7 @@
#include <rte_io.h>
#include <rte_eal.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_rawdev.h>
#include <rte_rawdev_pmd.h>
diff --git a/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c b/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c
index 8c01f25ec7..179e3e6dbb 100644
--- a/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c
+++ b/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c
@@ -6,7 +6,7 @@
#include <unistd.h>
#include <rte_bus.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_lcore.h>
diff --git a/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c b/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c
index d04e957d82..214fd83169 100644
--- a/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c
+++ b/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c
@@ -8,7 +8,7 @@
#include <fcntl.h>
#include <rte_bus.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
diff --git a/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c b/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
index b2ccdda83e..560a8bbf03 100644
--- a/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
+++ b/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
@@ -5,7 +5,7 @@
#include <unistd.h>
#include <rte_bus.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index f17b6df47f..0926aec5bc 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -9,7 +9,7 @@
#include <rte_regexdev.h>
#include <rte_regexdev_core.h>
#include <rte_regexdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <mlx5_common.h>
#include <mlx5_common_mr.h>
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 786718af53..7693171106 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -10,7 +10,7 @@
#include <rte_malloc.h>
#include <rte_log.h>
#include <rte_errno.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_pci.h>
#include <rte_regexdev_driver.h>
#include <rte_mbuf.h>
diff --git a/drivers/vdpa/ifc/base/ifcvf_osdep.h b/drivers/vdpa/ifc/base/ifcvf_osdep.h
index 6aef25ea45..3f3eb46f3f 100644
--- a/drivers/vdpa/ifc/base/ifcvf_osdep.h
+++ b/drivers/vdpa/ifc/base/ifcvf_osdep.h
@@ -10,7 +10,7 @@
#include <rte_cycles.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_log.h>
#include <rte_io.h>
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 1dc813d0a3..968dea5055 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -14,7 +14,7 @@
#include <rte_eal_paging.h>
#include <rte_malloc.h>
#include <rte_memory.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_vhost.h>
#include <rte_vdpa.h>
#include <rte_vdpa_dev.h>
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..81c7e738f5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -12,7 +12,7 @@
#include <rte_log.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <mlx5_glue.h>
#include <mlx5_common.h>
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..76779faa6c 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -8,7 +8,7 @@
#include <rte_malloc.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include <rte_config.h>
#include <ethdev_driver.h>
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index d14ea634b8..260362aacb 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -24,7 +24,7 @@ extern "C" {
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
#include "eventdev_pmd.h"
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH 7/8] kni: replace unused variable definition with reserved bytes
@ 2021-09-10 2:24 3% ` Chenbo Xia
2021-09-10 2:24 2% ` [dpdk-dev] [PATCH 8/8] bus/pci: remove ABIs in PCI bus Chenbo Xia
1 sibling, 0 replies; 200+ results
From: Chenbo Xia @ 2021-09-10 2:24 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit
The PCI ID and address in structure rte_kni_conf are never used. In
order not to break the ABI, replace these variables with reserved bytes.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
---
lib/kni/rte_kni.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/lib/kni/rte_kni.h b/lib/kni/rte_kni.h
index b0eaf46104..2281abbf6a 100644
--- a/lib/kni/rte_kni.h
+++ b/lib/kni/rte_kni.h
@@ -17,7 +17,6 @@
* and burst transmit packets to KNI interfaces.
*/
-#include <rte_pci.h>
#include <rte_memory.h>
#include <rte_mempool.h>
#include <rte_ether.h>
@@ -66,8 +65,7 @@ struct rte_kni_conf {
uint32_t core_id; /* Core ID to bind kernel thread on */
uint16_t group_id; /* Group ID */
unsigned mbuf_size; /* mbuf size */
- struct rte_pci_addr addr; /* depreciated */
- struct rte_pci_id id; /* depreciated */
+ uint8_t rsvd[20];
__extension__
uint8_t force_bind : 1; /* Flag to bind kernel thread */
--
2.17.1
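For readers wondering about the impact on applications, here is a minimal sketch (not part of the patch; the helper name and sizes are illustrative only) of filling in rte_kni_conf after this change. Because the removed PCI fields were never read by the KNI code, a caller that zeroes the structure first is unaffected:

    #include <string.h>
    #include <rte_kni.h>
    #include <rte_string_fns.h>

    /* Illustrative sketch: create a KNI interface without the removed
     * PCI fields. memset() leaves the new reserved bytes as zero, so
     * the structure layout and size are unchanged.
     */
    static struct rte_kni *
    kni_create(struct rte_mempool *pktmbuf_pool, const char *name)
    {
        struct rte_kni_conf conf;

        memset(&conf, 0, sizeof(conf));
        strlcpy(conf.name, name, RTE_KNI_NAMESIZE);
        conf.group_id = 0;
        conf.mbuf_size = 2048;

        /* NULL ops: fall back to the default request handlers. */
        return rte_kni_alloc(pktmbuf_pool, &conf, NULL);
    }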
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 4/6] eal: update rte_eal_wait_lcore definition
@ 2021-09-09 23:37 3% ` Honnappa Nagarahalli
0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-09-09 23:37 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, konstantin.ananyev, david.marchand,
Feifei Wang, techboard
Cc: Ruifeng Wang, nd, Feifei Wang (Arm Technology China), nd
+ Techboard.
This is the API (not ABI, as I said before) compatibility breakage that I raised during the last Techboard meeting.
The removal of the FINISHED state itself was announced, and this change is related to that removal. However, the specific API change was not called out in the deprecation notice.
> -----Original Message-----
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Sent: Thursday, September 9, 2021 6:13 PM
> To: dev@dpdk.org; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; konstantin.ananyev@intel.com;
> david.marchand@redhat.com; Feifei Wang <Feifei.Wang2@arm.com>
> Cc: Ruifeng Wang <Ruifeng.Wang@arm.com>; nd <nd@arm.com>; Feifei
> Wang (Arm Technology China) <Feifei.Wang@arm.com>
> Subject: [PATCH v2 4/6] eal: update rte_eal_wait_lcore definition
>
> Since the FINISHED state is removed, the API rte_eal_wait_lcore is updated to
> always return the status of the last function that ran in the worker core.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Feifei Wang <feifei.wang@arm.com>
> ---
> lib/eal/common/eal_common_launch.c | 6 ++----
> lib/eal/include/rte_launch.h | 12 +++++-------
> 2 files changed, 7 insertions(+), 11 deletions(-)
>
> diff --git a/lib/eal/common/eal_common_launch.c
> b/lib/eal/common/eal_common_launch.c
> index 78fd940267..4bc842417a 100644
> --- a/lib/eal/common/eal_common_launch.c
> +++ b/lib/eal/common/eal_common_launch.c
> @@ -23,10 +23,8 @@
> int
> rte_eal_wait_lcore(unsigned worker_id)
> {
> - if (lcore_config[worker_id].state == WAIT)
> - return 0;
> -
> - while (lcore_config[worker_id].state != WAIT)
> + while (__atomic_load_n(&lcore_config[worker_id].state,
> + __ATOMIC_ACQUIRE) != WAIT)
> rte_pause();
>
> rte_rmb();
> diff --git a/lib/eal/include/rte_launch.h b/lib/eal/include/rte_launch.h index
> ed0bb4762a..f2d386e6e2 100644
> --- a/lib/eal/include/rte_launch.h
> +++ b/lib/eal/include/rte_launch.h
> @@ -119,18 +119,16 @@ enum rte_lcore_state_t
> rte_eal_get_lcore_state(unsigned int worker_id);
> *
> * To be executed on the MAIN lcore only.
> *
> - * If the worker lcore identified by the worker_id is in a FINISHED state,
> - * switch to the WAIT state. If the lcore is in RUNNING state, wait until
> - * the lcore finishes its job and moves to the FINISHED state.
> + * If the lcore identified by the worker_id is in RUNNING state, wait
> + until
> + * the lcore finishes its job and moves to the WAIT state.
> *
> * @param worker_id
> * The identifier of the lcore.
> * @return
> - * - 0: If the lcore identified by the worker_id is in a WAIT state.
> + * - 0: If the remote launch function was never called on the lcore
> + * identified by the worker_id.
> * - The value that was returned by the previous remote launch
> - * function call if the lcore identified by the worker_id was in a
> - * FINISHED or RUNNING state. In this case, it changes the state
> - * of the lcore to WAIT.
> + * function call.
> */
> int rte_eal_wait_lcore(unsigned worker_id);
>
> --
> 2.25.1
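To make the semantic change concrete, here is a minimal sketch (not from the patch) of the launch/wait pattern it affects:

    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    static int
    worker_fn(void *arg)
    {
        /* This return value is reported back by rte_eal_wait_lcore(). */
        return *(int *)arg;
    }

    static int
    launch_and_collect(void)
    {
        static int code = 42;
        unsigned int worker_id = rte_get_next_lcore(-1, 1, 0);

        if (rte_eal_remote_launch(worker_fn, &code, worker_id) < 0)
            return -1;

        /*
         * Before this series: returns 0 if the worker had already moved
         * from FINISHED back to WAIT. After it: always returns the last
         * function's return value (42 here), and 0 only if nothing was
         * ever launched on this lcore.
         */
        return rte_eal_wait_lcore(worker_id);
    }

This is why the change is an API-level (semantic) break even though no function prototype changes.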
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v6 09/10] doc: changes for new pcapng and dumpcap
2021-09-09 23:33 1% ` [dpdk-dev] [PATCH v6 07/10] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-09 23:33 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-09 23:33 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 +++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 86 +++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 228 insertions(+), 87 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets,
+like the Wireshark dumpcap does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options, it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+ ``librte_pdump`` library. It provides more limited functionality
+ and depends on the Pcap PMD. It is retained only for compatibility reasons;
+ users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+Pcapng is a library for creating files in the Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with a header and trailer
+to match the format of a Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhance Packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces
+ and stopping after a given number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancement to the pdump library to support:
+ * Packet filter with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixes packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes capture files in the Pcapng packet
+format.
+
+Without any options set, it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on the interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In dpdk, only the ``testpmd`` is modified to initialize packet capture
+ framework, other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than the testpmd, the user
+ needs to explicitly modify that application to call the packet capture
+ framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+ code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture, use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 0000:00:03.0
+ 1. 0000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+ # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK.
+
+ * ``-C <byte_limit>`` -- it's a kernel thing
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` are like those of the Wireshark dumpcap
+ program and are not the same as ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
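To complement the documentation above, here is a sketch of the write path that librte_pcapng provides. Treat it as assumption-laden: rte_pcapng_write_packets() and the rte_pcapng_t handle type are inferred from this v6 series and may change before merge; only the rte_pcapng_copy() call appears verbatim in the pdump patch below, and the docs above state that rte_pcapng_init() must have been called once beforehand:

    #include <rte_cycles.h>
    #include <rte_mbuf.h>
    #include <rte_pcapng.h>

    /* Sketch only: wrap a received burst in pcapng enhanced packet
     * blocks and append them to an already-opened capture file.
     * 'mp' must have room for the header and trailer that
     * rte_pcapng_copy() adds around the packet data.
     */
    static void
    capture_burst(rte_pcapng_t *out, struct rte_mempool *mp,
                  uint16_t port, uint16_t queue,
                  struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_mbuf *blocks[nb_pkts];
        uint64_t cycles = rte_get_tsc_cycles();
        uint16_t i, n = 0;

        for (i = 0; i < nb_pkts; i++) {
            struct rte_mbuf *m;

            m = rte_pcapng_copy(port, queue, pkts[i], mp,
                                UINT32_MAX, cycles,
                                RTE_PCAPNG_DIRECTION_IN);
            if (m != NULL)
                blocks[n++] = m;
        }

        /* Assumed writer API from this series; the caller still owns
         * the clones and frees them after the write.
         */
        rte_pcapng_write_packets(out, blocks, n);
        rte_pktmbuf_free_bulk(blocks, n);
    }

The pdump patch that follows uses exactly this rte_pcapng_copy() call on its capture path.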
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v6 07/10] pdump: support pcapng and filtering
@ 2021-09-09 23:33 1% ` Stephen Hemminger
2021-09-09 23:33 1% ` [dpdk-dev] [PATCH v6 09/10] doc: changes for new pcapng and dumpcap Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-09 23:33 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of the exposed API or ABI)
is intentionally increased so that any attempt to run
mismatched primary/secondary processes will fail.
Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
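A sketch of how a secondary process might invoke the new filtering API follows; the rte_pdump.h prototype is not quoted in full in this excerpt, so treat the argument order of rte_pdump_enable_bpf() as an assumption based on this series:

    #include <rte_bpf.h>
    #include <rte_mbuf.h>
    #include <rte_pdump.h>
    #include <rte_ring.h>

    /* Sketch: enable pcapng-format capture on all queues of a port.
     * 'prm' is an optional BPF program (NULL means capture everything);
     * per the patch it must take RTE_BPF_ARG_PTR_MBUF, and packets for
     * which it returns zero are counted as filtered, not captured.
     */
    static int
    start_capture(uint16_t port, struct rte_ring *ring,
                  struct rte_mempool *mp, const struct rte_bpf_prm *prm)
    {
        uint32_t flags = RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG;
        uint32_t snaplen = RTE_MBUF_DEFAULT_BUF_SIZE;

        return rte_pdump_enable_bpf(port, RTE_PDUMP_ALL_QUEUES, flags,
                                    snaplen, ring, mp, prm);
    }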
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 437 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 110 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 435 insertions(+), 126 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index ba88e9eabc58..1da521ea6185 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..f2047ad9f001 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with older out
+ * version of DPDK using incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf_prm *prm;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,80 +61,140 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t rcs[nb_pkts];
+
+ if (cbs->filter &&
+ rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts) == 0) {
+ /* All packets were filtered out */
+ __atomic_fetch_add(&stats->filtered, nb_pkts,
+ __ATOMIC_RELAXED);
+ return;
+ }
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * if BPF program returns zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && rcs[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng then want to wrap packets
+ * otherwise a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +203,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +227,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +261,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -228,37 +290,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
uint16_t port;
int ret = 0;
+ struct rte_bpf *filter = NULL;
uint32_t flags;
uint16_t operation;
struct rte_ring *ring;
struct rte_mempool *mp;
- flags = p->flags;
- operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
+ if (p->prm) {
+ if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
+ "invalid BPF program type: %u\n",
+ p->prm->prog_arg.type);
return -EINVAL;
}
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
+
+ filter = rte_bpf_load(p->prm);
+ if (filter == NULL) {
+ PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+ rte_strerror(rte_errno));
+ return -rte_errno;
}
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ }
+
+ flags = p->flags;
+ operation = p->op;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +368,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +378,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -347,8 +421,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +476,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +517,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +531,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->prm = prm;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +568,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function because, although the original
+ * API left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a non-bogus
+ * value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +593,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +637,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, prm);
}
int
@@ -537,8 +676,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +692,66 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ PDUMP_LOG(ERR,
+ "pdump not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "can not find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..be3fd14c4bd3 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *filter);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +228,35 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
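A minimal sketch of how a secondary process might drive the new capture API
from the pdump patch above. The ring and mempool are assumed to be created
elsewhere, the 128-byte snaplen is an illustrative choice, and error
handling is trimmed:

    #include <stdio.h>
    #include <inttypes.h>

    #include <rte_pdump.h>

    static int
    capture_port(uint16_t port, struct rte_ring *ring, struct rte_mempool *mp)
    {
        struct rte_pdump_stats stats;

        /* Capture both directions on all queues (UINT16_MAX) in pcapng
         * format, copying at most 128 bytes of each packet; a NULL
         * rte_bpf_prm pointer means no BPF filter is applied.
         */
        if (rte_pdump_enable_bpf(port, UINT16_MAX,
                                 RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
                                 128, ring, mp, NULL) < 0)
            return -1;

        /* ... dequeue the duplicated mbufs from 'ring' and write them out ... */

        /* Totals for the port, summed over all Rx and Tx queues. */
        if (rte_pdump_stats(port, &stats) == 0)
            printf("accepted %" PRIu64 ", ring-full drops %" PRIu64 "\n",
                   stats.accepted, stats.ringfull);

        return rte_pdump_disable(port, UINT16_MAX, RTE_PDUMP_FLAG_RXTX);
    }
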
* [dpdk-dev] [PATCH v2 08/18] eal: fix typos in comments
@ 2021-09-09 18:10 4% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-09 18:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Minor spelling errors.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/eal/include/rte_function_versioning.h | 2 +-
lib/eal/windows/include/fnmatch.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/eal/include/rte_function_versioning.h b/lib/eal/include/rte_function_versioning.h
index 746a1e19923e..eb6dd2bc1727 100644
--- a/lib/eal/include/rte_function_versioning.h
+++ b/lib/eal/include/rte_function_versioning.h
@@ -15,7 +15,7 @@
/*
* Provides backwards compatibility when updating exported functions.
- * When a symol is exported from a library to provide an API, it also provides a
+ * When a symbol is exported from a library to provide an API, it also provides a
* calling convention (ABI) that is embodied in its name, return type,
* arguments, etc. On occasion that function may need to change to accommodate
* new functionality, behavior, etc. When that occurs, it is desirable to
diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h
index 142753c3568d..c272f65ccdc3 100644
--- a/lib/eal/windows/include/fnmatch.h
+++ b/lib/eal/windows/include/fnmatch.h
@@ -30,7 +30,7 @@ extern "C" {
* with the given regular expression pattern.
*
* @param pattern
- * regular expression notation decribing the pattern to match
+ * regular expression notation describing the pattern to match
*
* @param string
* source string to searcg for the pattern
--
2.30.2
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 08/18] eal: fix typos in comments
@ 2021-09-09 17:56 4% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-09 17:56 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Minor spelling errors.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/eal/include/rte_function_versioning.h | 2 +-
lib/eal/windows/include/fnmatch.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/eal/include/rte_function_versioning.h b/lib/eal/include/rte_function_versioning.h
index 746a1e19923e..eb6dd2bc1727 100644
--- a/lib/eal/include/rte_function_versioning.h
+++ b/lib/eal/include/rte_function_versioning.h
@@ -15,7 +15,7 @@
/*
* Provides backwards compatibility when updating exported functions.
- * When a symol is exported from a library to provide an API, it also provides a
+ * When a symbol is exported from a library to provide an API, it also provides a
* calling convention (ABI) that is embodied in its name, return type,
* arguments, etc. On occasion that function may need to change to accommodate
* new functionality, behavior, etc. When that occurs, it is desirable to
diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h
index 142753c3568d..c272f65ccdc3 100644
--- a/lib/eal/windows/include/fnmatch.h
+++ b/lib/eal/windows/include/fnmatch.h
@@ -30,7 +30,7 @@ extern "C" {
* with the given regular expression pattern.
*
* @param pattern
- * regular expression notation decribing the pattern to match
+ * regular expression notation describing the pattern to match
*
* @param string
* source string to searcg for the pattern
--
2.30.2
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v13 3/4] maintainers: add new abi scripts
2021-09-09 13:48 3% ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 1/4] devtools: script to track symbols over releases Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 2/4] devtools: script to send notifications of expired symbols Ray Kinsella
@ 2021-09-09 13:48 19% ` Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-09 13:48 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Add new abi management scripts to the MAINTAINERS file.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 3 +++
1 file changed, 3 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..ae38af1b85 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -127,8 +127,11 @@ F: devtools/check-abi-version.sh
F: devtools/check-symbol-change.sh
F: devtools/gen-abi.sh
F: devtools/libabigail.abignore
+F: devtools/symboltool.abignore
F: devtools/update-abi.sh
F: devtools/update_version_map_abi.py
+F: devtools/notify-symbol-maintainers.py
+F: devtools/symbol-tool.py
F: buildtools/check-symbols.sh
F: buildtools/map-list-symbol.sh
F: drivers/*/*/*.map
--
2.26.2
^ permalink raw reply [relevance 19%]
* [dpdk-dev] [PATCH v13 2/4] devtools: script to send notifications of expired symbols
2021-09-09 13:48 3% ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 1/4] devtools: script to track symbols over releases Ray Kinsella
@ 2021-09-09 13:48 5% ` Ray Kinsella
2021-09-09 13:48 19% ` [dpdk-dev] [PATCH v13 3/4] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-09 13:48 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Use this script with the output of the DPDK symbol tool, to notify
maintainers of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.
Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output terminal
Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password> --cc <someone@somewhere.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/notify-symbol-maintainers.py | 302 ++++++++++++++++++++++++++
1 file changed, 302 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..edf330f88b
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,302 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to notify maintainers of expired symbols'''
+import os
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+from pathlib import Path
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool, to notify maintainers
+and contributors of expired symbols by email. You need to define the environment
+variable DPDK_GETMAINTAINER_PATH for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
+--cc <someone@somewhere.com>
+''' # noqa: E501
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+''' # noqa: E501
+
+ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
+DPDK_GMP_ENV_VAR = 'DPDK_GETMAINTAINER_PATH'
+MAINTAINERS = 'MAINTAINERS'
+get_maintainer = ['devtools/get-maintainer.sh',
+ '--email', '-f']
+
+
+class EnvironException(Exception):
+ '''Subclass exception for Pylint\'s happiness.'''
+
+
+def _die_on_exception(e):
+ '''Print an exception, and quit'''
+
+ print('Fatal Error: ' + str(e))
+ sys.exit()
+
+
+def _check_get_maintainers_env():
+ '''Check get maintainers scripts are setup'''
+
+ if not Path(get_maintainer[0]).is_file():
+ raise EnvironException('Cannot locate DPDK\'s get maintainers script, '
+ 'usually at ' + get_maintainer[0] + '.')
+
+ if DPDK_GMP_ENV_VAR not in os.environ:
+ raise EnvironException(DPDK_GMP_ENV_VAR + ' is not defined.')
+
+ if not Path(os.environ[DPDK_GMP_ENV_VAR]).is_file():
+ raise EnvironException('Cannot locate get maintainers script, usually'
+ ' at ' + DPDK_GMP_ENV_VAR + '.')
+
+
+def _get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+
+ try:
+ _check_get_maintainers_env()
+ except EnvironException as e:
+ _die_on_exception(e)
+
+ try:
+ cmd = get_maintainer + [libpath]
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError as e:
+ _die_on_exception(e)
+
+ if result is None:
+ return None
+
+ email = result.stdout.decode('utf-8')
+ if email == '':
+ return None
+
+ email = list(filter(None, email.split('\n')))
+ return email
+
+
+default_maintainers = _get_maintainers(ABI_POLICY) + \
+ _get_maintainers(MAINTAINERS)
+
+
+def get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+ maintainers = _get_maintainers(libpath)
+
+ if maintainers is None:
+ maintainers = default_maintainers
+
+ return maintainers
+
+
+def get_message(library, symbols, config):
+ '''Build email message from symbols, config and maintainers'''
+ contributors = {}
+ message = {}
+ maintainers = get_maintainers(library)
+
+ if maintainers != default_maintainers:
+ message['CC'] = default_maintainers.copy()
+
+ if 'CC' in config:
+ message.setdefault('CC', []).append(config['CC'])
+
+ message['Subject'] = 'Expired symbols in {}\n'.format(library)
+
+ body = EMAIL_TEMPLATE
+ body += '{:<50}{:<25}{:<25}\n'.format('Symbol', 'Contributor', 'Email')
+ for sym in symbols:
+ body += ('{:<50}{:<25}{:<25}\n'.format(sym,
+ symbols[sym]['name'],
+ symbols[sym]['email']))
+ email = symbols[sym]['email']
+ contributors[email] = ''
+
+ contributors = list(contributors.keys())
+
+ message['To'] = maintainers + contributors
+ message['Body'] = body
+
+ return message
+
+
+class OutputEmail():
+ '''Format the output for email'''
+
+ def __init__(self, config):
+ self.config = config
+
+ self.terminal = OutputTerminal(config)
+ context = ssl.create_default_context()
+
+ # Try to log in to server and send email
+ try:
+ self.server = smtplib.SMTP(config['smtp_server'], 587)
+ self.server.starttls(context=context) # Secure the connection
+ self.server.login(config['sender'], config['password'])
+ except EnvironException as e:
+ _die_on_exception(e)
+
+ def message(self, message):
+ '''send email'''
+ self.terminal.message(message)
+
+ msg = EmailMessage()
+ msg.set_content(message.pop('Body'))
+
+ for key in message.keys():
+ msg[key] = message[key]
+
+ msg['From'] = self.config['sender']
+ msg['Reply-To'] = 'no-reply@dpdk.org'
+
+ self.server.send_message(msg)
+
+ time.sleep(1)
+
+ def __del__(self):
+ self.server.quit()
+
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+ '''Format the output for the terminal'''
+
+ def __init__(self, config):
+ self.config = config
+
+ def message(self, message):
+ '''Print email to terminal'''
+
+ terminal = 'To:' + ', '.join(message['To']) + '\n'
+ if 'sender' in self.config.keys():
+ terminal += 'From:' + self.config['sender'] + '\n'
+
+ terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+
+ if 'CC' in message:
+ terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+ terminal += 'Subject:' + message['Subject'] + '\n'
+ terminal += 'Body:' + message['Body'] + '\n'
+
+ print(terminal)
+ print('-' * 80)
+
+
+def parse_config(args):
+ '''put the command line args in the right places'''
+ config = {}
+ error_msg = None
+
+ outputs = {
+ None: OutputTerminal,
+ 'terminal': OutputTerminal,
+ 'email': OutputEmail
+ }
+
+ if args.format_output == 'email':
+ if args.smtp_server is None:
+ error_msg = 'SMTP server'
+ else:
+ config['smtp_server'] = args.smtp_server
+
+ if args.sender is None:
+ error_msg = 'sender'
+ else:
+ config['sender'] = args.sender
+
+ if args.password is None:
+ error_msg = 'password'
+ else:
+ config['password'] = args.password
+
+ if args.cc is not None:
+ config['CC'] = args.cc
+
+ if error_msg is not None:
+ print('Please specify a {} for email output'.format(error_msg))
+ return None
+
+ config['output'] = outputs[args.format_output]
+ return config
+
+
+def main():
+ '''Main entry point'''
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+ parser.add_argument('--format-output',
+ choices=['terminal', 'email'],
+ default='terminal')
+ parser.add_argument('--smtp-server')
+ parser.add_argument('--password')
+ parser.add_argument('--sender')
+ parser.add_argument('--cc')
+
+ args = parser.parse_args()
+ config = parse_config(args)
+ if config is None:
+ return
+
+ symbols = {}
+ lastlib = library = ''
+
+ output = config['output'](config)
+
+ for line in sys.stdin:
+ line = line.rstrip('\n')
+
+ if line.find('mapfile') >= 0:
+ continue
+ library, symbol, name, email = line.split(',')
+
+ if library != lastlib:
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+ symbols = {}
+
+ lastlib = library
+ symbols[symbol] = {'name': name, 'email': email}
+
+ # print the last library
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v13 1/4] devtools: script to track symbols over releases
2021-09-09 13:48 3% ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-09-09 13:48 5% ` Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 2/4] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-09 13:48 19% ` [dpdk-dev] [PATCH v13 3/4] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-09 13:48 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script has the ability to
count the added symbols between two dpdk releases, and to
list experimental symbols present in two dpdk releases
(expired symbols).
example usages:
Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols
Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05
List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/symbol-tool.py | 566 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 566 insertions(+)
create mode 100755 devtools/symbol-tool.py
diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..a77d8b2442
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,566 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+ from parsley import makeGrammar
+except ImportError:
+ print('This script uses the package Parsley to parse C Mapfiles.\n'
+ 'This can be installed with \"pip install parsley".')
+ sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script has the ability to
+count the added symbols between two dpdk releases, and to
+list experimental symbols present in two dpdk releases
+(expired symbols), including the name & email of the original contributor.
+
+example usages:
+
+Count symbols added since v19.11
+$ {s} count-symbols
+
+Count symbols added since v20.11
+$ {s} count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ {s} list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ {s} list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+""" # noqa: E501
+
+
+class EnvironException(Exception):
+ '''Subclass exception for Pylint\'s happiness.'''
+
+
+def get_abi_versions():
+ '''Returns a string of possible dpdk abi versions'''
+
+ year = datetime.date.today().year - 2000
+ tags = " |".join(['\'{}\''.format(i)
+ for i in reversed(range(21, year + 1))])
+ tags = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+ return tags
+
+
+def get_dpdk_releases():
+ '''Returns a list of dpdk release tags names since v19.11'''
+
+ year = datetime.date.today().year - 2000
+ year_range = "|".join("{}".format(i) for i in range(19, year + 1))
+ pattern = re.compile(r'^\"v(' + year_range + r')\.\d{2}\"$')
+
+ cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+ try:
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ print("Failed to interogate git for release tags")
+ sys.exit()
+
+ tags = result.stdout.decode('utf-8').split('\n')
+
+ # find the non-rc release tags between now and v19.11
+ tags = [tag.replace('\"', '')
+ for tag in reversed(tags)
+ if pattern.match(tag)][:-3]
+
+ return tags
+
+
+def fix_directory_name(path):
+ '''Prepend librte to the source directory name'''
+ mapfilepath1 = str(path.parent.name)
+ mapfilepath2 = str(path.parents[1])
+ mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+ return mapfilepath
+
+
+def directory_renamed(path, rel):
+ '''Fix removal of the librte_ from the directory names'''
+
+ mapfilepath = fix_directory_name(path)
+ tagfile = '{}:{}/{}'.format(rel, mapfilepath, path.name)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ return result
+
+
+def mapfile_renamed(path, rel):
+ '''Fix renaming of the map file'''
+ newfile = None
+
+ result = subprocess.run(['git', 'ls-tree',
+ rel, str(path.parent) + '/'],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ dentries = result.stdout.decode('utf-8')
+ dentries = dentries.split('\n')
+
+ # filter entries looking for the map file
+ dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+ if len(dentries) > 1 or len(dentries) == 0:
+ return None
+
+ dparts = dentries[0].split('/')
+ newfile = dparts[len(dparts) - 1]
+
+ if newfile is not None:
+ tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ else:
+ result = None
+
+ return result
+
+
+def mapfile_and_directory_renamed(path, rel):
+ '''Fix renaming of the map file & the source directory'''
+ mapfilepath = Path("{}/{}".format(fix_directory_name(path), path.name))
+
+ return mapfile_renamed(mapfilepath, rel)
+
+
+FIX_STRATEGIES = [directory_renamed,
+ mapfile_renamed,
+ mapfile_and_directory_renamed]
+
+
+def get_symbols(map_parser, release, mapfile_path):
+ '''Count the symbols for a given release and mapfile'''
+ abi_sections = {}
+
+ tagfile = '{}:{}'.format(release, mapfile_path)
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ for fix_strategy in FIX_STRATEGIES:
+ if result is not None:
+ break
+ result = fix_strategy(mapfile_path, release)
+
+ if result is not None:
+ mapfile = result.stdout.decode('utf-8')
+ abi_sections = map_parser(mapfile).abi()
+
+ return abi_sections
+
+
+def get_terminal_rows():
+ '''Find the number of rows in the terminal'''
+
+ try:
+ return os.get_terminal_size().lines
+ except IOError:
+ return 0
+
+
+class IgnoredSymbols(): # pylint: disable=too-few-public-methods
+ '''Symbols which are to be ignored for some period'''
+
+ SYMBOL_TOOL_IGNORE = 'devtools/symboltool.abignore'
+ ignore_regex = []
+ __initialized = False
+
+ @staticmethod
+ def initialize():
+ '''initialize once'''
+
+ if IgnoredSymbols.__initialized:
+ return
+ IgnoredSymbols.__initialized = True
+
+ if 'DPDK_SYMBOL_TOOL_IGNORE' in os.environ:
+ IgnoredSymbols.SYMBOL_TOOL_IGNORE = \
+ os.environ['DPDK_SYMBOL_TOOL_IGNORE']
+
+ # if the user specifies an ignore file that we can't find, then error
+ if not Path(IgnoredSymbols.SYMBOL_TOOL_IGNORE).is_file():
+ raise EnvironException('Cannot locate {}\'s '
+ 'ignore file'.format(__file__))
+
+ # if we cannot find the default ignore file, then continue
+ if not Path(IgnoredSymbols.SYMBOL_TOOL_IGNORE).is_file():
+ return
+
+ lines = open(Path(IgnoredSymbols.SYMBOL_TOOL_IGNORE)).readlines()
+ for line in lines:
+
+ line = line.strip()
+
+ # ignore comments and whitespace
+ if line.startswith(';') or len(line) == 0:
+ continue
+
+ IgnoredSymbols.ignore_regex.append(re.compile(line))
+
+ def __init__(self):
+ self.initialize()
+
+ def check_ignore(self, symbol):
+ '''Check symbol against the ignore regexes'''
+
+ for exp in self.ignore_regex:
+ if exp.search(symbol) is not None:
+ return True
+
+ return False
+
+
+class SymbolOwner():
+ '''Find the symbols original contributors name and email'''
+ symbol_regex = {}
+ blame_regex = {'name': r'author\s(.*)',
+ 'email': r'author-mail\s<(.*)>'}
+
+ def __init__(self, libpath, symbol):
+ self.libpath = libpath
+ self.symbol = symbol
+
+ # find variable definitions in C files, and functions in headers.
+ self.symbol_regex = \
+ {'*.c': r'^(?!extern).*' + self.symbol + '[^()]*;',
+ '*.h': r'__rte_experimental(?:.*\n){0,2}.*' + self.symbol}
+
+ def find_symbol_location(self):
+ '''Find where the symbol is defined in the source'''
+ for key in self.symbol_regex:
+ for path in Path(self.libpath).rglob(key):
+ file_text = open(path).read()
+
+ # find where the symbol is defined, either preceded by
+ # rte_experimental tag (functions)
+ # or followed by a ; (variables)
+
+ exp = self.symbol_regex[key]
+ pattern = re.compile(exp, re.MULTILINE)
+ search = pattern.search(file_text)
+
+ if search is not None:
+ symbol_pos = search.span()[1]
+ symbol_line = file_text.count('\n', 0, symbol_pos) + 1
+
+ return [str(path), symbol_line]
+ return None
+
+ def find_symbol_owner(self):
+ '''Find the symbols original contributors name and email'''
+ owners = {}
+ location = self.find_symbol_location()
+
+ if location is None:
+ return None
+
+ line = '-L {},{}'.format(location[1], location[1])
+ # git blame -p(orcelain) -L(ine) path
+ args = ['-p', line, location[0]]
+
+ try:
+ result = subprocess.run(['git', 'blame'] + args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ return None
+
+ blame = result.stdout.decode('utf-8')
+ for key in self.blame_regex:
+ pattern = re.compile(self.blame_regex[key], re.MULTILINE)
+ match = pattern.search(blame)
+
+ owners[key] = match.groups()[0]
+
+ return owners
+
+
+class SymbolCountOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + dpdk_releases
+
+ self.terminal_rows = get_terminal_rows()
+ self.row = 0
+
+ def set_terminal_output(self, dpdk_rel):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}' + \
+ ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+ self.column_fmt = '{:50}' + \
+ ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+ def set_csv_output(self, dpdk_rel):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},' + \
+ ','.join(['{},{}'] * (len(dpdk_rel)))
+ self.column_fmt = '{},' + \
+ ','.join(['{},'] * (len(dpdk_rel)))
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+ self.row += 1
+
+ def print_row(self, mapfile, symbols):
+ '''Print row of symbol values'''
+ mapfile = str(mapfile)
+ print(self.output_fmt.format(*([mapfile] + symbols)))
+ self.row += 1
+
+ if((self.terminal_rows > 0) and
+ ((self.row % self.terminal_rows) == 0)):
+ self.print_columns()
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class ListExpiredOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.terminal = True
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + \
+ ['expired (' + ','.join(dpdk_releases) + ')'] + \
+ ['contributor name', 'contributor email']
+
+ def set_terminal_output(self, _):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}{:<50}{:<25}{:<25}'
+ self.column_fmt = '{:50}{:50}{:25}{:25}'
+
+ def set_csv_output(self, _):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},{},{},{}'
+ self.column_fmt = '{},{},{},{}'
+ self.terminal = False
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+
+ def print_row(self, mapfile, symbols, owner):
+ '''Print row of symbol values'''
+
+ for symbol in symbols:
+ mapfile = str(mapfile)
+ name = owner[symbol]['name'] \
+ if owner[symbol] is not None else ''
+ email = owner[symbol]['email'] \
+ if owner[symbol] is not None else ''
+
+ print(self.output_fmt.format(mapfile, symbol, name, email))
+ if self.terminal:
+ mapfile = ''
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class CountSymbolsAction:
+ ''' Logic to count symbols added since a give release '''
+ IGNORE_SECTIONS = ['EXPERIMENTAL', 'INTERNAL']
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.symbols_count = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbol_count = experimental_count = 0
+
+ symbols = get_symbols(self.parser, release, self.path)
+
+ # which versions are present, and we care about
+ abi_vers = [abi_ver
+ for abi_ver in symbols
+ if abi_ver not in self.IGNORE_SECTIONS]
+
+ for abi_ver in abi_vers:
+ symbol_count += len(symbols[abi_ver])
+
+ # count experimental symbols
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental_count = len(symbols['EXPERIMENTAL'])
+
+ self.symbols_count += [symbol_count, experimental_count]
+
+ def __del__(self):
+ self.format_output.print_row(self.path.parent, self.symbols_count)
+
+
+class ListExpiredAction:
+ ''' Logic to list expired symbols between two releases '''
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.experimental_symbols = []
+ self.ignored_symbols = IgnoredSymbols()
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbols = get_symbols(self.parser, release, self.path)
+
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental = [exp.strip() for exp in symbols['EXPERIMENTAL']]
+
+ self.experimental_symbols.append(experimental)
+
+ def __del__(self):
+ if len(self.experimental_symbols) != 2:
+ return
+
+ tmp = self.experimental_symbols
+ # find symbols present in both dpdk releases
+ intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+ # remove ignored symbols
+ intersect_syms = [sym for sym in intersect_syms if not
+ self.ignored_symbols.check_ignore(sym)]
+
+ # check for empty set
+ if intersect_syms == []:
+ return
+
+ sym_owner = {}
+ for sym in intersect_syms:
+ sym_owner[sym] = \
+ SymbolOwner(self.path.parent, sym).find_symbol_owner()
+
+ self.format_output.print_row(self.path.parent,
+ intersect_syms,
+ sym_owner)
+
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction,
+ 'count-symbols': CountSymbolsAction,
+ 'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput,
+ 'count-symbols': SymbolCountOutput,
+ 'list-expired': ListExpiredOutput}
+
+
+def main():
+ '''Main entry point'''
+
+ dpdk_releases = get_dpdk_releases()
+
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+
+ parser.add_argument('mode', choices=['count-symbols', 'list-expired'])
+ parser.add_argument('--format-output', choices=['terminal', 'csv'],
+ default='terminal')
+ parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+ default=SRC_DIRECTORIES)
+ parser.add_argument('--releases',
+ help='2 x comma separated release tags e.g. \''
+ + ','.join([dpdk_releases[0], dpdk_releases[-1]])
+ + '\'')
+ args = parser.parse_args()
+
+ if args.releases is not None:
+ dpdk_releases = args.releases.split(',')
+
+ if args.mode == 'list-expired':
+ if len(dpdk_releases) < 2:
+ sys.exit('Please specify two releases to compare '
+ 'in \'list-expired\' mode.')
+ dpdk_releases = [dpdk_releases[0],
+ dpdk_releases[len(dpdk_releases) - 1]]
+
+ action = ACTIONS[args.mode]
+ format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+ map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+ map_parser = makeGrammar(map_grammar, {})
+
+ format_output.print_columns()
+
+ for src_dir in args.directory.split(','):
+ for path in Path(src_dir).rglob('*.map'):
+ release_action = action(path, map_parser, format_output)
+
+ for release in dpdk_releases:
+ release_action.add_mapfile(release)
+
+ # all the magic happens in the destructor
+ del release_action
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
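For reference, the ignore file read by the IgnoredSymbols class above
(devtools/symboltool.abignore by default, or the file named in the
DPDK_SYMBOL_TOOL_IGNORE environment variable) is plain text: lines
beginning with ';' are comments, blank lines are skipped, and every
remaining line is compiled as a Python regular expression and matched
against each symbol name. A hypothetical entry:

    ; exclude these symbols from list-expired output, pending API rework
    ^rte_foo_
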
* [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols
` (3 preceding siblings ...)
2021-09-08 15:13 3% ` [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-09-09 13:48 3% ` Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 1/4] devtools: script to track symbols over releases Ray Kinsella
` (2 more replies)
4 siblings, 3 replies; 200+ results
From: Ray Kinsella @ 2021-09-09 13:48 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Scripts to count and track the lifecycle of DPDK symbols.
The symbol-tool script reports on the growth of symbols over releases
and list expired symbols. The notify-symbol-maintainers script
consumes the input from symbol-tool and generates email notifications
of expired symbols.
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments
v8: added tool to notify maintainers of expired symbols
v9: removed hardcoded emails addressed and script names
v10: added ability to identify and notify the original contributors
v11: addressed feedback from Aaron Conole, including PEP8 errors.
v12: added symbol-tool ignore functionality, to ignore specific symbols
v13: renamed symboltool.abignore, typos, added ack from Akhil Goyal
Ray Kinsella (4):
devtools: script to track symbols over releases
devtools: script to send notifications of expired symbols
maintainers: add new abi scripts
devtools: add asym crypto to symbol-tool ignore
MAINTAINERS | 3 +
devtools/notify-symbol-maintainers.py | 302 ++++++++++++++
devtools/symbol-tool.py | 566 ++++++++++++++++++++++++++
devtools/symboltool.abignore | 3 +
4 files changed, 874 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
create mode 100755 devtools/symbol-tool.py
create mode 100644 devtools/symboltool.abignore
--
2.26.2
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 0/2] eventdev: add port usage hints
@ 2021-09-09 12:54 3% Harry van Haaren
0 siblings, 0 replies; 200+ results
From: Harry van Haaren @ 2021-09-09 12:54 UTC (permalink / raw)
To: dev; +Cc: pravin.pathak, timothy.mcdaniel, Harry van Haaren
These 2 patches are a suggestion to add a hint to the
struct rte_event_port_conf.event_port_cfg.
The usage of these hints is to allow an application to
identify/communicate to the PMD what ports will primarily
serve what purpose.
E.g, some ports are "mainly producers" in that they are
usually polling Ethdev RXQs (or other event sources..)
and enqueue the resulting events to the eventdev instance.
Similarly there are usages for "worker" (mainly forwards
events) and "consumer" (mainly consumes events without re-enq).
Note that these flags are *hints* only, and *functionally*
any combination of (NEW/FWD/RELEASE) is still allowed by
any port. The reason to add these is to allow a PMD to allocate
internal resource more efficiently.
Note that this implementation does not change the ABI,
as it gives a purpose to existing bits in an existing field.
Regards, -Harry
Harry van Haaren (2):
lib/eventdev: add usage hints to port configure API
examples/eventdev_pipeline: use port config hints
.../pipeline_worker_generic.c | 2 ++
.../eventdev_pipeline/pipeline_worker_tx.c | 2 ++
lib/eventdev/rte_eventdev.h | 23 +++++++++++++++++++
3 files changed, 27 insertions(+)
--
2.30.2
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 8/9] doc: changes for new pcapng and dumpcap
2021-09-08 21:50 1% ` [dpdk-dev] [PATCH v5 6/9] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-08 21:50 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 21:50 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 +++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 86 +++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 228 insertions(+), 87 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` tool captures packets much as the
+Wireshark ``dumpcap`` utility does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+ ``librte_pdump`` library. It provides more limited functionality
+ and depends on the Pcap PMD. It is retained only for compatibility reasons;
+ users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the ``dpdk-dumpcap`` tool as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+Pcapng is a library for creating files in the Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize the timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
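
As a rough illustration of the usage note above, writing a burst of mbufs
to a Pcapng file might look like the following untested sketch. Only
``rte_pcapng_init`` (named in the doc above) and ``rte_pcapng_copy``
(used by the pdump patch in this series) are grounded here;
``rte_pcapng_fdopen``, ``rte_pcapng_write_packets`` and ``rte_pcapng_close``
are assumed names -- check rte_pcapng.h in the pcapng patch for the
actual prototypes.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>
#include <rte_pcapng.h>

/* Untested sketch: copy one burst of packets into pcapng format and
 * write them out. Function names other than rte_pcapng_init() and
 * rte_pcapng_copy() are assumptions based on the library patch. */
static int
dump_burst(const char *path, uint16_t port_id, uint16_t queue,
           struct rte_mbuf **pkts, uint16_t nb_pkts, struct rte_mempool *mp)
{
        struct rte_mbuf *copies[nb_pkts];
        rte_pcapng_t *pcapng;
        uint16_t i, n = 0;
        int fd;

        rte_pcapng_init();      /* once per process, per the usage note */

        fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return -1;

        pcapng = rte_pcapng_fdopen(fd, NULL, NULL, "example", NULL);
        if (pcapng == NULL) {
                close(fd);
                return -1;
        }

        for (i = 0; i < nb_pkts; i++) {
                /* Wrap each packet with pcapng meta-data, as pdump_copy()
                 * does in the pdump patch of this series. */
                copies[n] = rte_pcapng_copy(port_id, queue, pkts[i], mp,
                                            UINT32_MAX, rte_get_tsc_cycles(),
                                            RTE_PCAPNG_DIRECTION_IN);
                if (copies[n] != NULL)
                        n++;
        }

        rte_pcapng_write_packets(pcapng, copies, n);
        rte_pktmbuf_free_bulk(copies, n);
        rte_pcapng_close(pcapng);       /* assumed to release the fd */
        return 0;
}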
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If the ``RTE_PDUMP_FLAG_PCAPNG`` flag is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block carries meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
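
To make the API descriptions above concrete, an untested sketch of a
secondary process enabling a pcapng-format capture on all queues of a
port follows. The ``rte_pdump_enable_bpf`` signature is taken from the
header changes in this series; the ring and mempool names and sizes are
illustrative only, and passing NULL for the BPF program captures all
packets.

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ring.h>
#include <rte_pdump.h>

#define CAPTURE_RING_SIZE 1024          /* illustrative sizes only */
#define CAPTURE_POOL_SIZE 8191

static int
start_capture(uint16_t port_id)
{
        struct rte_ring *ring;
        struct rte_mempool *mp;

        /* Ring that will receive the mirrored packets. */
        ring = rte_ring_create("capture_ring", CAPTURE_RING_SIZE,
                               rte_socket_id(),
                               RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (ring == NULL)
                return -1;

        /* Mempool from which the packet copies are allocated. */
        mp = rte_pktmbuf_pool_create("capture_pool", CAPTURE_POOL_SIZE, 0, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE,
                                     rte_socket_id());
        if (mp == NULL)
                return -1;

        /* Both directions, pcapng ring format, snap packets at 256 bytes;
         * prm == NULL means no BPF filter, i.e. capture everything. */
        return rte_pdump_enable_bpf(port_id, RTE_PDUMP_ALL_QUEUES,
                                    RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
                                    256, ring, mp, NULL);
}

A consumer would then dequeue the mbufs from the ring; with
``RTE_PDUMP_FLAG_PCAPNG`` set they already carry the enhanced packet
block framing described in the guide text above.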
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhance Packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces
+ and stopping after a given number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancement to the pdump library to support:
+ * Packet filter with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixes packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes captured packets to a file in the
+Pcapng capture file format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In dpdk, only the ``testpmd`` is modified to initialize packet capture
+ framework, other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than the testpmd, user
+ needs to explicitly modify that application to call packet capture
+ framework initialization code. Refer ``app/test-pmd/testpmd.c``
+ code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list the interfaces available for capture, use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 0000:00:03.0
+ 1. 0000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+ # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+ * ``-C <byte_limit>`` -- it's a kernel thing
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` follow the Wireshark dumpcap program and
+ are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v5 6/9] pdump: support pcapng and filtering
@ 2021-09-08 21:50 1% ` Stephen Hemminger
2021-09-08 21:50 1% ` [dpdk-dev] [PATCH v5 8/9] doc: changes for new pcapng and dumpcap Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 21:50 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of the exposed API or ABI)
is intentionally increased so that any attempt to mix
mismatched primary/secondary processes will fail.
Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 437 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 110 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 435 insertions(+), 126 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index ba88e9eabc58..aacc33a0f0c0 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..f2047ad9f001 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with an older
+ * version of DPDK using an incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf_prm *prm;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,80 +61,140 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t rcs[nb_pkts];
+
+ if (cbs->filter &&
+ rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts) == 0) {
+ /* All packets were filtered out */
+ __atomic_fetch_add(&stats->filtered, nb_pkts,
+ __ATOMIC_RELAXED);
+ return;
+ }
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * If the BPF program returns a zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && rcs[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng, wrap the packet with capture meta-data;
+ * otherwise make a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +203,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +227,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +261,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -228,37 +290,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
uint16_t port;
int ret = 0;
+ struct rte_bpf *filter = NULL;
uint32_t flags;
uint16_t operation;
struct rte_ring *ring;
struct rte_mempool *mp;
- flags = p->flags;
- operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
+ if (p->prm) {
+ if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
+ "invalid BPF program type: %u\n",
+ p->prm->prog_arg.type);
return -EINVAL;
}
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
+
+ filter = rte_bpf_load(p->prm);
+ if (filter == NULL) {
+ PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+ rte_strerror(rte_errno));
+ return -rte_errno;
}
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ }
+
+ flags = p->flags;
+ operation = p->op;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +368,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +378,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -347,8 +421,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +476,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +517,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +531,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->prm = prm;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +568,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function, because although the original API
+ * left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +593,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +637,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, prm);
}
int
@@ -537,8 +676,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +692,66 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ PDUMP_LOG(ERR,
+ "pdump not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "can not find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..be3fd14c4bd3 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * unused, should be NULL
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *filter);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +228,35 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
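
For reference, reading the new counters from a monitoring loop could look
like the following untested sketch; the structure fields and return
convention come from the declarations above, while the printing is
illustrative only.

#include <inttypes.h>
#include <stdio.h>
#include <rte_pdump.h>

/* Print the capture statistics for one port (summed over all queues). */
static int
show_capture_stats(uint16_t port_id)
{
        struct rte_pdump_stats stats;

        if (rte_pdump_stats(port_id, &stats) < 0)
                return -1;      /* rte_errno is set accordingly */

        printf("port %u: accepted %" PRIu64 ", filtered %" PRIu64
               ", nombuf %" PRIu64 ", ringfull %" PRIu64 "\n",
               port_id, stats.accepted, stats.filtered,
               stats.nombuf, stats.ringfull);
        return 0;
}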
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v4 7/8] doc: changes for new pcapng and dumpcap
2021-09-08 17:16 1% ` [dpdk-dev] [PATCH v4 5/8] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-08 17:16 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 17:16 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 +++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 86 +++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 228 insertions(+), 87 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the
+``librte_pcapng`` library for writing packets to a file. There are two
+sample applications: ``dpdk-dumpcap`` and the older ``dpdk-pdump``.
+
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets
+much as Wireshark dumpcap does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+ ``librte_pdump`` library. The ``dpdk-pdump`` tool provides more limited
+ functionality and depends on the Pcap PMD. It is retained only for
+ compatibility reasons; users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the ``dpdk-dumpcap`` tool as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
- reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
+ reading from file /tmp/capture.pcapng, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+``librte_pcapng`` is a library for creating files in the Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
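+A minimal usage sketch follows (the exact signature of
+``rte_pcapng_init`` is assumed here; ``rte_pcapng_copy`` is the helper
+used by the pdump library in this series to wrap an mbuf in the
+enhanced packet block format, and the port, queue, mbuf and mempool
+values are application-specific assumptions)::
+
+    #include <rte_cycles.h>
+    #include <rte_pcapng.h>
+
+    /* once, before any capture */
+    rte_pcapng_init();
+
+    /* wrap a received mbuf 'm' for capture, copying from mempool 'mp' */
+    struct rte_mbuf *cap;
+    cap = rte_pcapng_copy(port_id, queue, m, mp,
+                          UINT32_MAX, /* snaplen: no limit */
+                          rte_get_tsc_cycles(),
+                          RTE_PCAPNG_DIRECTION_IN);
+    /* 'cap' now holds the packet as a pcapng enhanced packet block */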
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
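+A secondary process consuming the ring might look roughly like the
+following sketch (the ring is assumed to have been created by the
+application and passed to the enable API; error handling omitted)::
+
+    struct rte_mbuf *burst[32];
+    unsigned int i, n;
+
+    n = rte_ring_dequeue_burst(ring, (void **)burst, 32, NULL);
+    for (i = 0; i < n; i++) {
+        /* write out or inspect the copied packet */
+        rte_pktmbuf_free(burst[i]);
+    }
+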
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
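+
+Capture statistics can be polled with the ``rte_pdump_stats()`` API
+added in this patch series; a minimal sketch (the port number is an
+application-specific assumption)::
+
+    /* requires <stdio.h> and <inttypes.h> */
+    struct rte_pdump_stats stats;
+
+    if (rte_pdump_stats(port_id, &stats) == 0)
+        printf("accepted %" PRIu64 " filtered %" PRIu64
+               " missed %" PRIu64 "\n",
+               stats.accepted, stats.filtered, stats.ringfull);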
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhance Packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces
+ and stopping after a number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancements to the pdump library to support:
+ * Packet filtering with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixed packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes files in the Pcapng packet capture
+file format.
+
+Without any options set, it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In dpdk, only the ``testpmd`` is modified to initialize packet capture
+ framework, other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than the testpmd, the user
+ needs to explicitly modify that application to call the packet capture
+ framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+ code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture, use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 0000:00:03.0
+ 1. 0000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+ # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+ * ``-C <byte_limit>`` -- it's a kernel thing
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` are like those of the Wireshark dumpcap
+ program and are not the same as those of ``dpdk-pdump`` and other DPDK
+ applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v4 5/8] pdump: support pcapng and filtering
@ 2021-09-08 17:16 1% ` Stephen Hemminger
2021-09-08 17:16 1% ` [dpdk-dev] [PATCH v4 7/8] doc: changes for new pcapng and dumpcap Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 17:16 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of the exposed API or ABI)
is intentionally increased to cause any attempt to run
mismatched primary/secondary processes to fail.
Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
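
For illustration, a capture application might use the new API roughly
as follows (a sketch only; creation of the ring and mempool, and
building the BPF program 'prm' with the rte_bpf library, are assumed
to happen elsewhere):

	ret = rte_pdump_enable_bpf(port_id, RTE_PDUMP_ALL_QUEUES,
				   RTE_PDUMP_FLAG_RX | RTE_PDUMP_FLAG_PCAPNG,
				   UINT32_MAX /* snaplen: no limit */,
				   ring, mp, prm);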
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 437 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 110 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 435 insertions(+), 126 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index 51bf9c2d11f0..4b01949fe715 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..9cb31d6b5008 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with an older
+ * version of DPDK using an incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf_prm *prm;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,80 +61,140 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t rcs[nb_pkts];
+
+ if (cbs->filter &&
+ rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts) == 0) {
+ /* All packets were filtered out */
+ __atomic_fetch_add(&stats->filtered, nb_pkts,
+ __ATOMIC_RELAXED);
+ return;
+ }
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * if BPF program returns zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && rcs[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng then want to wrap packets
+ * otherwise a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +203,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +227,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +261,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -228,37 +290,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
uint16_t port;
int ret = 0;
+ struct rte_bpf *filter = NULL;
uint32_t flags;
uint16_t operation;
struct rte_ring *ring;
struct rte_mempool *mp;
- flags = p->flags;
- operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
+ if (p->prm) {
+ if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
+ "invalid BPF program type: %u\n",
+ p->prm->prog_arg.type);
return -EINVAL;
}
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
+
+ filter = rte_bpf_load(p->prm);
+ if (filter == NULL) {
+ PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+ rte_strerror(rte_errno));
+ return -rte_errno;
}
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ }
+
+ flags = p->flags;
+ operation = p->op;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +368,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +378,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -347,8 +421,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +476,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +517,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +531,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->prm = prm;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +568,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function, because although the original
+ * API left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +593,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +637,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, prm);
}
int
@@ -537,8 +676,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +692,66 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ const struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ PDUMP_LOG(ERR,
+ "pdump not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "can not find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..e2fbd78c6273 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused; should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program to run to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused; should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program to run to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +228,35 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v4 2/4] cryptodev: change valid dev API
2021-09-07 19:22 3% ` [dpdk-dev] [PATCH v4 2/4] cryptodev: change valid dev API Akhil Goyal
@ 2021-09-08 15:16 0% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-08 15:16 UTC (permalink / raw)
To: Akhil Goyal, dev, Ray Kinsella, techboard
Cc: Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
Ankur Dwivedi, Michael Shamis, Nagadheeraj Rottela, jianjay.zhou
> Subject: [PATCH v4 2/4] cryptodev: change valid dev API
>
> The API rte_cryptodev_pmd_is_valid_dev, can be used
> by the application as well as PMD to check whether
> the device is valid or not. Hence, _pmd is removed
> from the API.
> The applications and drivers which use this API are
> also updated.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
This patch is merged on dpdk-next-crypto.
As discussed in the DPDK techboard meeting, adding Ray and the techboard to
take a closer look so that changes (if any) can be made before merging to main.
> ---
> doc/guides/rel_notes/release_21_11.rst | 4 ++
> .../net/softnic/rte_eth_softnic_cryptodev.c | 2 +-
> examples/fips_validation/main.c | 2 +-
> examples/ip_pipeline/cryptodev.c | 3 +-
> lib/cryptodev/rte_cryptodev.c | 50 +++++++++----------
> lib/cryptodev/rte_cryptodev.h | 11 ++++
> lib/cryptodev/rte_cryptodev_pmd.h | 11 ----
> lib/cryptodev/version.map | 2 +-
> lib/eventdev/rte_event_crypto_adapter.c | 4 +-
> lib/eventdev/rte_eventdev.c | 2 +-
> lib/pipeline/rte_table_action.c | 2 +-
> 11 files changed, 48 insertions(+), 45 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 411fa9530a..e7ad50ba09 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -102,6 +102,10 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* cryptodev: The API rte_cryptodev_pmd_is_valid_dev is modified to
> + rte_cryptodev_is_valid_dev as it can be used by the application as
> + well as PMD to check whether the device is valid or not.
> +
>
> ABI Changes
> -----------
> diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
> b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
> index a1a4ca5650..8e278801c5 100644
> --- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
> +++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
> @@ -82,7 +82,7 @@ softnic_cryptodev_create(struct pmd_internals *p,
>
> dev_id = (uint32_t)status;
> } else {
> - if (rte_cryptodev_pmd_is_valid_dev(params->dev_id) == 0)
> + if (rte_cryptodev_is_valid_dev(params->dev_id) == 0)
> return NULL;
>
> dev_id = params->dev_id;
> diff --git a/examples/fips_validation/main.c
> b/examples/fips_validation/main.c
> index c175fe6ac2..e892078f0e 100644
> --- a/examples/fips_validation/main.c
> +++ b/examples/fips_validation/main.c
> @@ -196,7 +196,7 @@ parse_cryptodev_id_arg(char *arg)
> }
>
>
> - if (!rte_cryptodev_pmd_is_valid_dev(cryptodev_id)) {
> + if (!rte_cryptodev_is_valid_dev(cryptodev_id)) {
> RTE_LOG(ERR, USER1, "Error %i: invalid cryptodev id %s\n",
> cryptodev_id, arg);
> return -1;
> diff --git a/examples/ip_pipeline/cryptodev.c
> b/examples/ip_pipeline/cryptodev.c
> index b0d9f3d217..9997d97456 100644
> --- a/examples/ip_pipeline/cryptodev.c
> +++ b/examples/ip_pipeline/cryptodev.c
> @@ -6,7 +6,6 @@
> #include <stdio.h>
>
> #include <rte_cryptodev.h>
> -#include <rte_cryptodev_pmd.h>
> #include <rte_string_fns.h>
>
> #include "cryptodev.h"
> @@ -74,7 +73,7 @@ cryptodev_create(const char *name, struct
> cryptodev_params *params)
>
> dev_id = (uint32_t)status;
> } else {
> - if (rte_cryptodev_pmd_is_valid_dev(params->dev_id) == 0)
> + if (rte_cryptodev_is_valid_dev(params->dev_id) == 0)
> return NULL;
>
> dev_id = params->dev_id;
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index 447aa9d519..37502b9b3c 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -663,7 +663,7 @@ rte_cryptodev_is_valid_device_data(uint8_t dev_id)
> }
>
> unsigned int
> -rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
> +rte_cryptodev_is_valid_dev(uint8_t dev_id)
> {
> struct rte_cryptodev *dev = NULL;
>
> @@ -761,7 +761,7 @@ rte_cryptodev_socket_id(uint8_t dev_id)
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> + if (!rte_cryptodev_is_valid_dev(dev_id))
> return -1;
>
> dev = rte_cryptodev_pmd_get_dev(dev_id);
> @@ -1032,7 +1032,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct
> rte_cryptodev_config *config)
> struct rte_cryptodev *dev;
> int diag;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1080,7 +1080,7 @@ rte_cryptodev_start(uint8_t dev_id)
>
> CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1110,7 +1110,7 @@ rte_cryptodev_stop(uint8_t dev_id)
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return;
> }
> @@ -1136,7 +1136,7 @@ rte_cryptodev_close(uint8_t dev_id)
> struct rte_cryptodev *dev;
> int retval;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -1;
> }
> @@ -1176,7 +1176,7 @@ rte_cryptodev_get_qp_status(uint8_t dev_id,
> uint16_t queue_pair_id)
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1207,7 +1207,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id,
> uint16_t queue_pair_id,
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1283,7 +1283,7 @@ rte_cryptodev_add_enq_callback(uint8_t dev_id,
> return NULL;
> }
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> rte_errno = ENODEV;
> return NULL;
> @@ -1349,7 +1349,7 @@ rte_cryptodev_remove_enq_callback(uint8_t
> dev_id,
> return -EINVAL;
> }
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> return -ENODEV;
> }
> @@ -1418,7 +1418,7 @@ rte_cryptodev_add_deq_callback(uint8_t dev_id,
> return NULL;
> }
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> rte_errno = ENODEV;
> return NULL;
> @@ -1484,7 +1484,7 @@ rte_cryptodev_remove_deq_callback(uint8_t
> dev_id,
> return -EINVAL;
> }
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> return -ENODEV;
> }
> @@ -1542,7 +1542,7 @@ rte_cryptodev_stats_get(uint8_t dev_id, struct
> rte_cryptodev_stats *stats)
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> return -ENODEV;
> }
> @@ -1565,7 +1565,7 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return;
> }
> @@ -1581,7 +1581,7 @@ rte_cryptodev_info_get(uint8_t dev_id, struct
> rte_cryptodev_info *dev_info)
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> return;
> }
> @@ -1608,7 +1608,7 @@ rte_cryptodev_callback_register(uint8_t dev_id,
> if (!cb_fn)
> return -EINVAL;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1652,7 +1652,7 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
> if (!cb_fn)
> return -EINVAL;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1720,7 +1720,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
> uint8_t index;
> int ret;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1765,7 +1765,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
> uint8_t index;
> int ret;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1939,7 +1939,7 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
> struct rte_cryptodev *dev;
> uint8_t driver_id;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -1969,7 +1969,7 @@ rte_cryptodev_asym_session_clear(uint8_t dev_id,
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> return -EINVAL;
> }
> @@ -2079,7 +2079,7 @@
> rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
> struct rte_cryptodev *dev;
> unsigned int priv_sess_size;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> + if (!rte_cryptodev_is_valid_dev(dev_id))
> return 0;
>
> dev = rte_cryptodev_pmd_get_dev(dev_id);
> @@ -2099,7 +2099,7 @@
> rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
> unsigned int header_size = sizeof(void *) * nb_drivers;
> unsigned int priv_sess_size;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> + if (!rte_cryptodev_is_valid_dev(dev_id))
> return 0;
>
> dev = rte_cryptodev_pmd_get_dev(dev_id);
> @@ -2156,7 +2156,7 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t
> dev_id,
> {
> struct rte_cryptodev *dev;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + if (!rte_cryptodev_is_valid_dev(dev_id)) {
> sym_crypto_fill_status(vec, EINVAL);
> return 0;
> }
> @@ -2179,7 +2179,7 @@ rte_cryptodev_get_raw_dp_ctx_size(uint8_t
> dev_id)
> int32_t size = sizeof(struct rte_crypto_raw_dp_ctx);
> int32_t priv_size;
>
> - if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> + if (!rte_cryptodev_is_valid_dev(dev_id))
> return -EINVAL;
>
> dev = rte_cryptodev_pmd_get_dev(dev_id);
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index 11f4e6fdbf..bb01f0f195 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -1368,6 +1368,17 @@ __rte_experimental
> unsigned int
> rte_cryptodev_asym_get_private_session_size(uint8_t dev_id);
>
> +/**
> + * Validate if the crypto device index is valid attached crypto device.
> + *
> + * @param dev_id Crypto device index.
> + *
> + * @return
> + * - If the device index is valid (1) or not (0).
> + */
> +unsigned int
> +rte_cryptodev_is_valid_dev(uint8_t dev_id);
> +
> /**
> * Provide driver identifier.
> *
> diff --git a/lib/cryptodev/rte_cryptodev_pmd.h
> b/lib/cryptodev/rte_cryptodev_pmd.h
> index 1274436870..dd2a4940a2 100644
> --- a/lib/cryptodev/rte_cryptodev_pmd.h
> +++ b/lib/cryptodev/rte_cryptodev_pmd.h
> @@ -94,17 +94,6 @@ rte_cryptodev_pmd_get_dev(uint8_t dev_id);
> struct rte_cryptodev *
> rte_cryptodev_pmd_get_named_dev(const char *name);
>
> -/**
> - * Validate if the crypto device index is valid attached crypto device.
> - *
> - * @param dev_id Crypto device index.
> - *
> - * @return
> - * - If the device index is valid (1) or not (0).
> - */
> -unsigned int
> -rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id);
> -
> /**
> * The pool of rte_cryptodev structures.
> */
> diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
> index 979d823a7c..1a7f759c57 100644
> --- a/lib/cryptodev/version.map
> +++ b/lib/cryptodev/version.map
> @@ -25,6 +25,7 @@ DPDK_22 {
> rte_cryptodev_get_feature_name;
> rte_cryptodev_get_sec_ctx;
> rte_cryptodev_info_get;
> + rte_cryptodev_is_valid_dev;
> rte_cryptodev_name_get;
> rte_cryptodev_pmd_allocate;
> rte_cryptodev_pmd_callback_process;
> @@ -33,7 +34,6 @@ DPDK_22 {
> rte_cryptodev_pmd_destroy;
> rte_cryptodev_pmd_get_dev;
> rte_cryptodev_pmd_get_named_dev;
> - rte_cryptodev_pmd_is_valid_dev;
> rte_cryptodev_pmd_parse_input_args;
> rte_cryptodev_pmd_release_device;
> rte_cryptodev_queue_pair_count;
> diff --git a/lib/eventdev/rte_event_crypto_adapter.c
> b/lib/eventdev/rte_event_crypto_adapter.c
> index e1d38d383d..2d38389858 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.c
> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> @@ -781,7 +781,7 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t
> id,
>
> EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
>
> - if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) {
> + if (!rte_cryptodev_is_valid_dev(cdev_id)) {
> RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id);
> return -EINVAL;
> }
> @@ -898,7 +898,7 @@ rte_event_crypto_adapter_queue_pair_del(uint8_t
> id, uint8_t cdev_id,
>
> EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
>
> - if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) {
> + if (!rte_cryptodev_is_valid_dev(cdev_id)) {
> RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id);
> return -EINVAL;
> }
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 594dd5e759..cb0ed7b620 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -165,7 +165,7 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id,
> uint8_t cdev_id,
> struct rte_cryptodev *cdev;
>
> RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> - if (!rte_cryptodev_pmd_is_valid_dev(cdev_id))
> + if (!rte_cryptodev_is_valid_dev(cdev_id))
> return -EINVAL;
>
> dev = &rte_eventdevs[dev_id];
> diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
> index 98f3438774..54721ed96a 100644
> --- a/lib/pipeline/rte_table_action.c
> +++ b/lib/pipeline/rte_table_action.c
> @@ -1732,7 +1732,7 @@ struct sym_crypto_data {
> static int
> sym_crypto_cfg_check(struct rte_table_action_sym_crypto_config *cfg)
> {
> - if (!rte_cryptodev_pmd_is_valid_dev(cfg->cryptodev_id))
> + if (!rte_cryptodev_is_valid_dev(cfg->cryptodev_id))
> return -EINVAL;
> if (cfg->mp_create == NULL || cfg->mp_init == NULL)
> return -EINVAL;
> --
> 2.25.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v12 3/4] maintainers: add new abi scripts
2021-09-08 15:13 3% ` [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 1/4] devtools: script to track symbols over releases Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 2/4] devtools: script to send notifications of expired symbols Ray Kinsella
@ 2021-09-08 15:13 17% ` Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:13 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Add new abi management scripts to the MAINTAINERS file.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..ff8245271f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -129,6 +129,8 @@ F: devtools/gen-abi.sh
F: devtools/libabigail.abignore
F: devtools/update-abi.sh
F: devtools/update_version_map_abi.py
+F: devtools/notify-symbol-maintainers.py
+F: devtools/symbol-tool.py
F: buildtools/check-symbols.sh
F: buildtools/map-list-symbol.sh
F: drivers/*/*/*.map
--
2.26.2
^ permalink raw reply [relevance 17%]
* [dpdk-dev] [PATCH v12 2/4] devtools: script to send notifications of expired symbols
2021-09-08 15:13 3% ` [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 1/4] devtools: script to track symbols over releases Ray Kinsella
@ 2021-09-08 15:13 5% ` Ray Kinsella
2021-09-08 15:13 17% ` [dpdk-dev] [PATCH v12 3/4] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:13 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Use this script with the output of the DPDK symbol tool to notify
maintainers of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.
Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output terminal
Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password> --cc <someone@somewhere.com>
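The script expects symbol-tool's csv on stdin: a header row containing
'mapfile', then one row per expired symbol giving the library, symbol
and original contributor, e.g. (values below are illustrative only):
mapfile,expired (v19.11,v21.08),contributor name,contributor email
lib/foo,rte_foo_fn,Jane Developer,jane@example.com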
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/notify-symbol-maintainers.py | 302 ++++++++++++++++++++++++++
1 file changed, 302 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..edf330f88b
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,302 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to notify maintainers of expired symbols'''
+import os
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+from pathlib import Path
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool to notify maintainers
+and contributors of expired symbols by email. You need to define the environment
+variable DPDK_GETMAINTAINER_PATH for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
+--cc <someone@somewhere.com>
+''' # noqa: E501
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal, in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+''' # noqa: E501
+
+ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
+DPDK_GMP_ENV_VAR = 'DPDK_GETMAINTAINER_PATH'
+MAINTAINERS = 'MAINTAINERS'
+get_maintainer = ['devtools/get-maintainer.sh',
+ '--email', '-f']
+
+
+class EnvironException(Exception):
+ '''Subclass exception for Pylint\'s happiness.'''
+
+
+def _die_on_exception(e):
+ '''Print an exception, and quit'''
+
+ print('Fatal Error: ' + str(e))
+ sys.exit()
+
+
+def _check_get_maintainers_env():
+ '''Check get maintainers scripts are setup'''
+
+ if not Path(get_maintainer[0]).is_file():
+ raise EnvironException('Cannot locate DPDK\'s get maintainers script,'
+ ' usually at ' + get_maintainer[0] + '.')
+
+ if DPDK_GMP_ENV_VAR not in os.environ:
+ raise EnvironException(DPDK_GMP_ENV_VAR + ' is not defined.')
+
+ if not Path(os.environ[DPDK_GMP_ENV_VAR]).is_file():
+ raise EnvironException('Cannot locate the get maintainers script'
+ ' pointed to by $' + DPDK_GMP_ENV_VAR + '.')
+
+
+def _get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+
+ try:
+ _check_get_maintainers_env()
+ except EnvironException as e:
+ _die_on_exception(e)
+
+ try:
+ cmd = get_maintainer + [libpath]
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError as e:
+ _die_on_exception(e)
+
+ if result is None:
+ return None
+
+ email = result.stdout.decode('utf-8')
+ if email == '':
+ return None
+
+ email = list(filter(None, email.split('\n')))
+ return email
+
+
+default_maintainers = _get_maintainers(ABI_POLICY) + \
+ _get_maintainers(MAINTAINERS)
+
+
+def get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+ maintainers = _get_maintainers(libpath)
+
+ if maintainers is None:
+ maintainers = default_maintainers
+
+ return maintainers
+
+
+def get_message(library, symbols, config):
+ '''Build email message from symbols, config and maintainers'''
+ contributors = {}
+ message = {}
+ maintainers = get_maintainers(library)
+
+ if maintainers != default_maintainers:
+ message['CC'] = default_maintainers.copy()
+
+ if 'CC' in config:
+ message.setdefault('CC', []).append(config['CC'])
+
+ message['Subject'] = 'Expired symbols in {}'.format(library)
+
+ body = EMAIL_TEMPLATE
+ body += '{:<50}{:<25}{:<25}\n'.format('Symbol', 'Contributor', 'Email')
+ for sym in symbols:
+ body += ('{:<50}{:<25}{:<25}\n'.format(sym,
+ symbols[sym]['name'],
+ symbols[sym]['email']))
+ email = symbols[sym]['email']
+ contributors[email] = ''
+
+ contributors = list(contributors.keys())
+
+ message['To'] = maintainers + contributors
+ message['Body'] = body
+
+ return message
+
+
+class OutputEmail():
+ '''Format the output for email'''
+
+ def __init__(self, config):
+ self.config = config
+
+ self.terminal = OutputTerminal(config)
+ context = ssl.create_default_context()
+
+ # Try to log in to server and send email
+ try:
+ self.server = smtplib.SMTP(config['smtp_server'], 587)
+ self.server.starttls(context=context) # Secure the connection
+ self.server.login(config['sender'], config['password'])
+ except (smtplib.SMTPException, OSError) as e:
+ _die_on_exception(e)
+
+ def message(self, message):
+ '''send email'''
+ self.terminal.message(message)
+
+ msg = EmailMessage()
+ msg.set_content(message.pop('Body'))
+
+ for key in message.keys():
+ msg[key] = message[key]
+
+ msg['From'] = self.config['sender']
+ msg['Reply-To'] = 'no-reply@dpdk.org'
+
+ self.server.send_message(msg)
+
+ time.sleep(1)
+
+ def __del__(self):
+ self.server.quit()
+
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+ '''Format the output for the terminal'''
+
+ def __init__(self, config):
+ self.config = config
+
+ def message(self, message):
+ '''Print email to terminal'''
+
+ terminal = 'To:' + ', '.join(message['To']) + '\n'
+ if 'sender' in self.config.keys():
+ terminal += 'From:' + self.config['sender'] + '\n'
+
+ terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+
+ if 'CC' in message:
+ terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+ terminal += 'Subject:' + message['Subject'] + '\n'
+ terminal += 'Body:' + message['Body'] + '\n'
+
+ print(terminal)
+ print('-' * 80)
+
+
+def parse_config(args):
+ '''put the command line args in the right places'''
+ config = {}
+ error_msg = None
+
+ outputs = {
+ None: OutputTerminal,
+ 'terminal': OutputTerminal,
+ 'email': OutputEmail
+ }
+
+ if args.format_output == 'email':
+ if args.smtp_server is None:
+ error_msg = 'SMTP server'
+ else:
+ config['smtp_server'] = args.smtp_server
+
+ if args.sender is None:
+ error_msg = 'sender'
+ else:
+ config['sender'] = args.sender
+
+ if args.password is None:
+ error_msg = 'password'
+ else:
+ config['password'] = args.password
+
+ if args.cc is not None:
+ config['CC'] = args.cc
+
+ if error_msg is not None:
+ print('Please specify a {} for email output'.format(error_msg))
+ return None
+
+ config['output'] = outputs[args.format_output]
+ return config
+
+
+def main():
+ '''Main entry point'''
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+ parser.add_argument('--format-output',
+ choices=['terminal', 'email'],
+ default='terminal')
+ parser.add_argument('--smtp-server')
+ parser.add_argument('--password')
+ parser.add_argument('--sender')
+ parser.add_argument('--cc')
+
+ args = parser.parse_args()
+ config = parse_config(args)
+ if config is None:
+ return
+
+ symbols = {}
+ lastlib = library = ''
+
+ output = config['output'](config)
+
+ for line in sys.stdin:
+ line = line.rstrip('\n')
+
+ if line.find('mapfile') >= 0:
+ continue
+ library, symbol, name, email = line.split(',')
+
+ if library != lastlib:
+ # don't flush before the first library has been seen
+ if lastlib != '':
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+ symbols = {}
+
+ lastlib = library
+ symbols[symbol] = {'name': name, 'email': email}
+
+ # print the last library, if any input was seen
+ if lastlib != '':
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v12 1/4] devtools: script to track symbols over releases
2021-09-08 15:13 3% ` [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-09-08 15:13 5% ` Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 2/4] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-08 15:13 17% ` [dpdk-dev] [PATCH v12 3/4] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:13 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script can count the symbols
added between two DPDK releases, and list the experimental
symbols present in both releases (expired symbols).
example usages:
Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols
Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05
List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib
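In terminal mode, count-symbols prints the library path followed by a
(stable, experimental) pair of counts per release, e.g. (counts below
are illustrative only):
mapfile v19.11 v21.08
lib/foo 2 10 12 4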
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/symbol-tool.py | 566 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 566 insertions(+)
create mode 100755 devtools/symbol-tool.py
diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..a71ab59539
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,566 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+ from parsley import makeGrammar
+except ImportError:
+ print('This script uses the package Parsley to parse C map files.\n'
+ 'This can be installed with "pip install parsley".')
+ sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script can count the symbols
+added between two DPDK releases, and list the experimental
+symbols present in both releases (expired symbols), including
+the name & email of the original contributor.
+
+example usages:
+
+Count symbols added since v19.11
+$ {s} count-symbols
+
+Count symbols added since v20.11
+$ {s} count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ {s} list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ {s} list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+""" # noqa: E501
+
+
+class EnvironException(Exception):
+ '''Subclass exception for Pylint\'s happiness.'''
+
+
+def get_abi_versions():
+ '''Returns a string of possible dpdk abi versions'''
+
+ year = datetime.date.today().year - 2000
+ tags = " |".join(['\'{}\''.format(i)
+ for i in reversed(range(21, year + 1))])
+ tags = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
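+
+ # e.g. when run in 2021 this yields:
+ # '21' | '20.0.1' | '20.0' | '20'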
+
+ return tags
+
+
+def get_dpdk_releases():
+ '''Returns a list of dpdk release tags names since v19.11'''
+
+ year = datetime.date.today().year - 2000
+ year_range = "|".join("{}".format(i) for i in range(19, year + 1))
+ pattern = re.compile(r'^\"v(' + year_range + r')\.\d{2}\"$')
+
+ cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+ try:
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ print("Failed to interogate git for release tags")
+ sys.exit()
+
+ tags = result.stdout.decode('utf-8').split('\n')
+
+ # find the non-rcs between now and v19.11
+ tags = [tag.replace('\"', '')
+ for tag in reversed(tags)
+ if pattern.match(tag)][:-3]
+
+ return tags
+
+
+def fix_directory_name(path):
+ '''Prepend librte to the source directory name'''
+ mapfilepath1 = str(path.parent.name)
+ mapfilepath2 = str(path.parents[1])
+ mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+ return mapfilepath
+
+
+def directory_renamed(path, rel):
+ '''Fix removal of the librte_ from the directory names'''
+
+ mapfilepath = fix_directory_name(path)
+ tagfile = '{}:{}/{}'.format(rel, mapfilepath, path.name)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ return result
+
+
+def mapfile_renamed(path, rel):
+ '''Fix renaming of the map file'''
+ newfile = None
+
+ result = subprocess.run(['git', 'ls-tree',
+ rel, str(path.parent) + '/'],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ dentries = result.stdout.decode('utf-8')
+ dentries = dentries.split('\n')
+
+ # filter entries looking for the map file
+ dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+ if len(dentries) > 1 or len(dentries) == 0:
+ return None
+
+ dparts = dentries[0].split('/')
+ newfile = dparts[len(dparts) - 1]
+
+ if newfile is not None:
+ tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ else:
+ result = None
+
+ return result
+
+
+def mapfile_and_directory_renamed(path, rel):
+ '''Fix renaming of the map file & the source directory'''
+ mapfilepath = Path("{}/{}".format(fix_directory_name(path), path.name))
+
+ return mapfile_renamed(mapfilepath, rel)
+
+
+FIX_STRATEGIES = [directory_renamed,
+ mapfile_renamed,
+ mapfile_and_directory_renamed]
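+
+# e.g. today's lib/fib/rte_fib.map lived at lib/librte_fib/rte_fib.map
+# before the librte_ prefix was dropped from the directory names; the
+# strategies above retry older tags with the reconstructed paths
+# (example path only)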
+
+
+def get_symbols(map_parser, release, mapfile_path):
+ '''Count the symbols for a given release and mapfile'''
+ abi_sections = {}
+
+ tagfile = '{}:{}'.format(release, mapfile_path)
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ for fix_strategy in FIX_STRATEGIES:
+ if result is not None:
+ break
+ result = fix_strategy(mapfile_path, release)
+
+ if result is not None:
+ mapfile = result.stdout.decode('utf-8')
+ abi_sections = map_parser(mapfile).abi()
+
+ return abi_sections
+
+
+def get_terminal_rows():
+ '''Find the number of rows in the terminal'''
+
+ try:
+ return os.get_terminal_size().lines
+ except IOError:
+ return 0
+
+
+class IgnoredSymbols(): # pylint: disable=too-few-public-methods
+ '''Symbols which are to be ignored for some period'''
+
+ SYMBOL_TOOL_IGNORE = 'devtools/symboltool.ignore'
+ ignore_regex = []
+ __initialized = False
+
+ @staticmethod
+ def initialize():
+ '''Initialize once'''
+
+ if IgnoredSymbols.__initialized:
+ return
+ IgnoredSymbols.__initialized = True
+
+ user_ignore_file = 'DPDK_SYMBOL_TOOL_IGNORE' in os.environ
+ if user_ignore_file:
+ IgnoredSymbols.SYMBOL_TOOL_IGNORE = \
+ os.environ['DPDK_SYMBOL_TOOL_IGNORE']
+
+ # if the user specified an ignore file and we can't find it,
+ # then error; if the default ignore file is missing, continue
+ if not Path(IgnoredSymbols.SYMBOL_TOOL_IGNORE).is_file():
+ if user_ignore_file:
+ raise EnvironException('Cannot locate {}\'s '
+ 'ignore file'.format(__file__))
+ return
+
+ lines = open(Path(IgnoredSymbols.SYMBOL_TOOL_IGNORE)).readlines()
+ for line in lines:
+
+ line = line.strip()
+
+ # ignore comments and whitespace
+ if line.startswith(';') or len(line) == 0:
+ continue
+
+ IgnoredSymbols.ignore_regex.append(re.compile(line))
+
+ def __init__(self):
+ self.initialize()
+
+ def check_ignore(self, symbol):
+ '''Check symbol against the ignore regexes'''
+
+ for exp in self.ignore_regex:
+ if exp.search(symbol) is not None:
+ return True
+
+ return False
+
+
+class SymbolOwner():
+ '''Find the symbols original contributors name and email'''
+ symbol_regex = {}
+ blame_regex = {'name': r'author\s(.*)',
+ 'email': r'author-mail\s<(.*)>'}
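+
+ # these match the 'author' and 'author-mail <...>' header
+ # lines emitted by 'git blame --porcelain'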
+
+ def __init__(self, libpath, symbol):
+ self.libpath = libpath
+ self.symbol = symbol
+
+ # find variable definitions in C files, and functions in headers.
+ self.symbol_regex = \
+ {'*.c': r'^(?!extern).*' + self.symbol + '[^()]*;',
+ '*.h': r'__rte_experimental(?:.*\n){0,2}.*' + self.symbol}
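+
+ # e.g. the '*.c' regex matches 'uint64_t rte_foo_cnt;', while
+ # the '*.h' regex matches '__rte_experimental' followed within
+ # two lines by the symbol name (identifiers illustrative)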
+
+ def find_symbol_location(self):
+ '''Find where the symbol is defined in the source'''
+ for key in self.symbol_regex:
+ for path in Path(self.libpath).rglob(key):
+ file_text = open(path).read()
+
+ # find where the symbol is defined, either preceded by
+ # rte_experimental tag (functions)
+ # or followed by a ; (variables)
+
+ exp = self.symbol_regex[key]
+ pattern = re.compile(exp, re.MULTILINE)
+ search = pattern.search(file_text)
+
+ if search is not None:
+ symbol_pos = search.span()[1]
+ symbol_line = file_text.count('\n', 0, symbol_pos) + 1
+
+ return [str(path), symbol_line]
+ return None
+
+ def find_symbol_owner(self):
+ '''Find the symbols original contributors name and email'''
+ owners = {}
+ location = self.find_symbol_location()
+
+ if location is None:
+ return None
+
+ line = '-L {},{}'.format(location[1], location[1])
+ # git blame -p(orcelain) -L(ine) path
+ args = ['-p', line, location[0]]
+
+ try:
+ result = subprocess.run(['git', 'blame'] + args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ return None
+
+ blame = result.stdout.decode('utf-8')
+ for key in self.blame_regex:
+ pattern = re.compile(self.blame_regex[key], re.MULTILINE)
+ match = pattern.search(blame)
+
+ owners[key] = match.groups()[0]
+
+ return owners
+
+
+class SymbolCountOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + dpdk_releases
+
+ self.terminal_rows = get_terminal_rows()
+ self.row = 0
+
+ def set_terminal_output(self, dpdk_rel):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}' + \
+ ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+ self.column_fmt = '{:50}' + \
+ ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+ def set_csv_output(self, dpdk_rel):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},' + \
+ ','.join(['{},{}'] * (len(dpdk_rel)))
+ self.column_fmt = '{},' + \
+ ','.join(['{},'] * (len(dpdk_rel)))
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+ self.row += 1
+
+ def print_row(self, mapfile, symbols):
+ '''Print row of symbol values'''
+ mapfile = str(mapfile)
+ print(self.output_fmt.format(*([mapfile] + symbols)))
+ self.row += 1
+
+ if((self.terminal_rows > 0) and
+ ((self.row % self.terminal_rows) == 0)):
+ self.print_columns()
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class ListExpiredOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.terminal = True
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + \
+ ['expired (' + ','.join(dpdk_releases) + ')'] + \
+ ['contributor name', 'contributor email']
+
+ def set_terminal_output(self, _):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}{:<50}{:<25}{:<25}'
+ self.column_fmt = '{:50}{:50}{:25}{:25}'
+
+ def set_csv_output(self, _):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},{},{},{}'
+ self.column_fmt = '{},{},{},{}'
+ self.terminal = False
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+
+ def print_row(self, mapfile, symbols, owner):
+ '''Print row of symbol values'''
+
+ for symbol in symbols:
+ mapfile = str(mapfile)
+ name = owner[symbol]['name'] \
+ if owner[symbol] is not None else ''
+ email = owner[symbol]['email'] \
+ if owner[symbol] is not None else ''
+
+ print(self.output_fmt.format(mapfile, symbol, name, email))
+ if self.terminal:
+ mapfile = ''
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class CountSymbolsAction:
+ ''' Logic to count symbols added since a given release '''
+ IGNORE_SECTIONS = ['EXPERIMENTAL', 'INTERNAL']
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.symbols_count = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbol_count = experimental_count = 0
+
+ symbols = get_symbols(self.parser, release, self.path)
+
+ # which ABI versions are present, and which ones we care about
+ abi_vers = [abi_ver
+ for abi_ver in symbols
+ if abi_ver not in self.IGNORE_SECTIONS]
+
+ for abi_ver in abi_vers:
+ symbol_count += len(symbols[abi_ver])
+
+ # count experimental symbols
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental_count = len(symbols['EXPERIMENTAL'])
+
+ self.symbols_count += [symbol_count, experimental_count]
+
+ def __del__(self):
+ self.format_output.print_row(self.path.parent, self.symbols_count)
+
+
+class ListExpiredAction:
+ ''' Logic to list expired symbols between two releases '''
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.experimental_symbols = []
+ self.ignored_symbols = IgnoredSymbols()
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbols = get_symbols(self.parser, release, self.path)
+
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental = [exp.strip() for exp in symbols['EXPERIMENTAL']]
+
+ self.experimental_symbols.append(experimental)
+
+ def __del__(self):
+ if len(self.experimental_symbols) != 2:
+ return
+
+ tmp = self.experimental_symbols
+ # find symbols present in both dpdk releases
+ intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+ # remove ignored symbols
+ intersect_syms = [sym for sym in intersect_syms if not
+ self.ignored_symbols.check_ignore(sym)]
+
+ # check for empty set
+ if intersect_syms == []:
+ return
+
+ sym_owner = {}
+ for sym in intersect_syms:
+ sym_owner[sym] = \
+ SymbolOwner(self.path.parent, sym).find_symbol_owner()
+
+ self.format_output.print_row(self.path.parent,
+ intersect_syms,
+ sym_owner)
+
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction,
+ 'count-symbols': CountSymbolsAction,
+ 'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput,
+ 'count-symbols': SymbolCountOutput,
+ 'list-expired': ListExpiredOutput}
+
+
+def main():
+ '''Main entry point'''
+
+ dpdk_releases = get_dpdk_releases()
+
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+
+ parser.add_argument('mode', choices=['count-symbols', 'list-expired'])
+ parser.add_argument('--format-output', choices=['terminal', 'csv'],
+ default='terminal')
+ parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+ default=SRC_DIRECTORIES)
+ parser.add_argument('--releases',
+ help='2 x comma separated release tags e.g. \''
+ + ','.join([dpdk_releases[0], dpdk_releases[-1]])
+ + '\'')
+ args = parser.parse_args()
+
+ if args.releases is not None:
+ dpdk_releases = args.releases.split(',')
+
+ if args.mode == 'list-expired':
+ if len(dpdk_releases) < 2:
+ sys.exit('Please specify two releases to compare '
+ 'in \'list-expired\' mode.')
+ dpdk_releases = [dpdk_releases[0],
+ dpdk_releases[len(dpdk_releases) - 1]]
+
+ action = ACTIONS[args.mode]
+ format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+ map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+ map_parser = makeGrammar(map_grammar, {})
+
+ format_output.print_columns()
+
+ for src_dir in args.directory.split(','):
+ for path in Path(src_dir).rglob('*.map'):
+ release_action = action(path, map_parser, format_output)
+
+ for release in dpdk_releases:
+ release_action.add_mapfile(release)
+
+ # all the magic happens in the destructor
+ del release_action
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols
` (2 preceding siblings ...)
2021-09-08 15:12 3% ` [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-09-08 15:13 3% ` Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 1/4] devtools: script to track symbols over releases Ray Kinsella
` (2 more replies)
2021-09-09 13:48 3% ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
4 siblings, 3 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:13 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Scripts to count and track the lifecycle of DPDK symbols.
The symbol-tool script reports on the growth of symbols over releases
and list expired symbols. The notify-symbol-maintainers script
consumes the input from symbol-tool and generates email notifications
of expired symbols.
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments
v8: added tool to notify maintainers of expired symbols
v9: removed hardcoded emails addressed and script names
v10: added ability to identify and notify the original contributors
v11: addressed feedback from Aaron Conole, including PEP8 errors.
v12: added symbol-tool ignore functionality, to ignore specific symbols.
Ray Kinsella (4):
devtools: script to track symbols over releases
devtools: script to send notifications of expired symbols
maintainers: add new abi scripts
devtools: add asym crypto to symbol-tool ignore
MAINTAINERS | 2 +
devtools/notify-symbol-maintainers.py | 302 ++++++++++++++
devtools/symbol-tool.py | 566 ++++++++++++++++++++++++++
devtools/symboltool.ignore | 3 +
4 files changed, 873 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
create mode 100755 devtools/symbol-tool.py
create mode 100644 devtools/symboltool.ignore
--
2.26.2
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts
2021-09-08 15:12 3% ` [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
@ 2021-09-08 15:12 17% ` Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:12 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Add new abi management scripts to the MAINTAINERS file.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..ff8245271f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -129,6 +129,8 @@ F: devtools/gen-abi.sh
F: devtools/libabigail.abignore
F: devtools/update-abi.sh
F: devtools/update_version_map_abi.py
+F: devtools/notify-symbol-maintainers.py
+F: devtools/symbol-tool.py
F: buildtools/check-symbols.sh
F: buildtools/map-list-symbol.sh
F: drivers/*/*/*.map
--
2.26.2
^ permalink raw reply [relevance 17%]
* [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols
2021-09-08 15:12 3% ` [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
@ 2021-09-08 15:12 5% ` Ray Kinsella
2021-09-08 15:12 17% ` [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:12 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Use this script with the output of the DPDK symbol tool to notify
maintainers of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.
Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output terminal
Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password> --cc <someone@somewhere.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/notify-symbol-maintainers.py | 302 ++++++++++++++++++++++++++
1 file changed, 302 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..edf330f88b
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,302 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to notify maintainers of expired symbols'''
+import os
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+from pathlib import Path
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool to notify maintainers
+and contributors of expired symbols by email. You need to define the environment
+variable DPDK_GETMAINTAINER_PATH for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
+--cc <someone@somewhere.com>
+''' # noqa: E501
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal, in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+''' # noqa: E501
+
+ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
+DPDK_GMP_ENV_VAR = 'DPDK_GETMAINTAINER_PATH'
+MAINTAINERS = 'MAINTAINERS'
+get_maintainer = ['devtools/get-maintainer.sh',
+ '--email', '-f']
+
+
+class EnvironException(Exception):
+ '''Subclass exception for Pylint\'s happiness.'''
+
+
+def _die_on_exception(e):
+ '''Print an exception, and quit'''
+
+ print('Fatal Error: ' + str(e))
+ sys.exit()
+
+
+def _check_get_maintainers_env():
+ '''Check get maintainers scripts are setup'''
+
+ if not Path(get_maintainer[0]).is_file():
+ raise EnvironException('Cannot locate DPDK\'s get maintainers script,'
+ ' usually at ' + get_maintainer[0] + '.')
+
+ if DPDK_GMP_ENV_VAR not in os.environ:
+ raise EnvironException(DPDK_GMP_ENV_VAR + ' is not defined.')
+
+ if not Path(os.environ[DPDK_GMP_ENV_VAR]).is_file():
+ raise EnvironException('Cannot locate the get maintainers script'
+ ' pointed to by $' + DPDK_GMP_ENV_VAR + '.')
+
+
+def _get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+
+ try:
+ _check_get_maintainers_env()
+ except EnvironException as e:
+ _die_on_exception(e)
+
+ try:
+ cmd = get_maintainer + [libpath]
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError as e:
+ _die_on_exception(e)
+
+ if result is None:
+ return None
+
+ email = result.stdout.decode('utf-8')
+ if email == '':
+ return None
+
+ email = list(filter(None, email.split('\n')))
+ return email
+
+
+default_maintainers = _get_maintainers(ABI_POLICY) + \
+ _get_maintainers(MAINTAINERS)
+
+
+def get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+ maintainers = _get_maintainers(libpath)
+
+ if maintainers is None:
+ maintainers = default_maintainers
+
+ return maintainers
+
+
+def get_message(library, symbols, config):
+ '''Build email message from symbols, config and maintainers'''
+ contributors = {}
+ message = {}
+ maintainers = get_maintainers(library)
+
+ if maintainers != default_maintainers:
+ message['CC'] = default_maintainers.copy()
+
+ if 'CC' in config:
+ message.setdefault('CC', []).append(config['CC'])
+
+ message['Subject'] = 'Expired symbols in {}'.format(library)
+
+ body = EMAIL_TEMPLATE
+ body += '{:<50}{:<25}{:<25}\n'.format('Symbol', 'Contributor', 'Email')
+ for sym in symbols:
+ body += ('{:<50}{:<25}{:<25}\n'.format(sym,
+ symbols[sym]['name'],
+ symbols[sym]['email']))
+ email = symbols[sym]['email']
+ contributors[email] = ''
+
+ contributors = list(contributors.keys())
+
+ message['To'] = maintainers + contributors
+ message['Body'] = body
+
+ return message
+
+
+class OutputEmail():
+ '''Format the output for email'''
+
+ def __init__(self, config):
+ self.config = config
+
+ self.terminal = OutputTerminal(config)
+ context = ssl.create_default_context()
+
+ # Try to log in to server and send email
+ try:
+ self.server = smtplib.SMTP(config['smtp_server'], 587)
+ self.server.starttls(context=context) # Secure the connection
+ self.server.login(config['sender'], config['password'])
+ except (smtplib.SMTPException, OSError) as e:
+ _die_on_exception(e)
+
+ def message(self, message):
+ '''send email'''
+ self.terminal.message(message)
+
+ msg = EmailMessage()
+ msg.set_content(message.pop('Body'))
+
+ for key in message.keys():
+ msg[key] = message[key]
+
+ msg['From'] = self.config['sender']
+ msg['Reply-To'] = 'no-reply@dpdk.org'
+
+ self.server.send_message(msg)
+
+ time.sleep(1)
+
+ def __del__(self):
+ self.server.quit()
+
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+ '''Format the output for the terminal'''
+
+ def __init__(self, config):
+ self.config = config
+
+ def message(self, message):
+ '''Print email to terminal'''
+
+ terminal = 'To:' + ', '.join(message['To']) + '\n'
+ if 'sender' in self.config.keys():
+ terminal += 'From:' + self.config['sender'] + '\n'
+
+ terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+
+ if 'CC' in message:
+ terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+ terminal += 'Subject:' + message['Subject'] + '\n'
+ terminal += 'Body:' + message['Body'] + '\n'
+
+ print(terminal)
+ print('-' * 80)
+
+
+def parse_config(args):
+ '''put the command line args in the right places'''
+ config = {}
+ error_msg = None
+
+ outputs = {
+ None: OutputTerminal,
+ 'terminal': OutputTerminal,
+ 'email': OutputEmail
+ }
+
+ if args.format_output == 'email':
+ if args.smtp_server is None:
+ error_msg = 'SMTP server'
+ else:
+ config['smtp_server'] = args.smtp_server
+
+ if args.sender is None:
+ error_msg = 'sender'
+ else:
+ config['sender'] = args.sender
+
+ if args.password is None:
+ error_msg = 'password'
+ else:
+ config['password'] = args.password
+
+ if args.cc is not None:
+ config['CC'] = args.cc
+
+ if error_msg is not None:
+ print('Please specify a {} for email output'.format(error_msg))
+ return None
+
+ config['output'] = outputs[args.format_output]
+ return config
+
+
+def main():
+ '''Main entry point'''
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+ parser.add_argument('--format-output',
+ choices=['terminal', 'email'],
+ default='terminal')
+ parser.add_argument('--smtp-server')
+ parser.add_argument('--password')
+ parser.add_argument('--sender')
+ parser.add_argument('--cc')
+
+ args = parser.parse_args()
+ config = parse_config(args)
+ if config is None:
+ return
+
+ symbols = {}
+ lastlib = library = ''
+
+ output = config['output'](config)
+
+ for line in sys.stdin:
+ line = line.rstrip('\n')
+
+ if line.find('mapfile') >= 0:
+ continue
+ library, symbol, name, email = line.split(',')
+
+ if library != lastlib:
+ # don't flush before the first library has been seen
+ if lastlib != '':
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+ symbols = {}
+
+ lastlib = library
+ symbols[symbol] = {'name': name, 'email': email}
+
+ # print the last library, if any input was seen
+ if lastlib != '':
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases
2021-09-08 15:12 3% ` [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-09-08 15:12 5% ` Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-08 15:12 17% ` [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:12 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script can count the symbols
added between two DPDK releases, and list the experimental
symbols present in both releases (expired symbols).
example usages:
Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols
Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05
List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/symbol-tool.py | 505 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 505 insertions(+)
create mode 100755 devtools/symbol-tool.py
diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..a0b81c1b90
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,505 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+ from parsley import makeGrammar
+except ImportError:
+ print('This script uses the package Parsley to parse C map files.\n'
+ 'This can be installed with "pip install parsley".')
+ sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script can count the symbols
+added between two DPDK releases, and list the experimental
+symbols present in both releases (expired symbols), including
+the name & email of the original contributor.
+
+example usages:
+
+Count symbols added since v19.11
+$ {s} count-symbols
+
+Count symbols added since v20.11
+$ {s} count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ {s} list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ {s} list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+""" # noqa: E501
+
+
+def get_abi_versions():
+ '''Returns a string of possible dpdk abi versions'''
+
+ year = datetime.date.today().year - 2000
+ tags = " |".join(['\'{}\''.format(i)
+ for i in reversed(range(21, year + 1))])
+ tags = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+ return tags
+
+
+def get_dpdk_releases():
+ '''Returns a list of dpdk release tags names since v19.11'''
+
+ year = datetime.date.today().year - 2000
+ year_range = "|".join("{}".format(i) for i in range(19, year + 1))
+ pattern = re.compile(r'^\"v(' + year_range + r')\.\d{2}\"$')
+
+ cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+ try:
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ print("Failed to interogate git for release tags")
+ sys.exit()
+
+ tags = result.stdout.decode('utf-8').split('\n')
+
+ # find the non-rcs between now and v19.11
+ tags = [tag.replace('\"', '')
+ for tag in reversed(tags)
+ if pattern.match(tag)][:-3]
+
+ return tags
+
+
+def fix_directory_name(path):
+ '''Prepend librte to the source directory name'''
+ mapfilepath1 = str(path.parent.name)
+ mapfilepath2 = str(path.parents[1])
+ mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+ return mapfilepath
+
+
+def directory_renamed(path, rel):
+ '''Fix removal of the librte_ from the directory names'''
+
+ mapfilepath = fix_directory_name(path)
+ tagfile = '{}:{}/{}'.format(rel, mapfilepath, path.name)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ return result
+
+
+def mapfile_renamed(path, rel):
+ '''Fix renaming of the map file'''
+ newfile = None
+
+ result = subprocess.run(['git', 'ls-tree',
+ rel, str(path.parent) + '/'],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ dentries = result.stdout.decode('utf-8')
+ dentries = dentries.split('\n')
+
+ # filter entries looking for the map file
+ dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+ if len(dentries) > 1 or len(dentries) == 0:
+ return None
+
+ dparts = dentries[0].split('/')
+ newfile = dparts[len(dparts) - 1]
+
+ if newfile is not None:
+ tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ else:
+ result = None
+
+ return result
+
+
+def mapfile_and_directory_renamed(path, rel):
+ '''Fix renaming of the map file & the source directory'''
+ mapfilepath = Path("{}/{}".format(fix_directory_name(path), path.name))
+
+ return mapfile_renamed(mapfilepath, rel)
+
+
+FIX_STRATEGIES = [directory_renamed,
+ mapfile_renamed,
+ mapfile_and_directory_renamed]
+
+
+def get_symbols(map_parser, release, mapfile_path):
+ '''Count the symbols for a given release and mapfile'''
+ abi_sections = {}
+
+ tagfile = '{}:{}'.format(release, mapfile_path)
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ for fix_strategy in FIX_STRATEGIES:
+ if result is not None:
+ break
+ result = fix_strategy(mapfile_path, release)
+
+ if result is not None:
+ mapfile = result.stdout.decode('utf-8')
+ abi_sections = map_parser(mapfile).abi()
+
+ return abi_sections
+
+
+def get_terminal_rows():
+ '''Find the number of rows in the terminal'''
+
+ try:
+ return os.get_terminal_size().lines
+ except IOError:
+ return 0
+
+
+class SymbolOwner():
+ '''Find the symbols original contributors name and email'''
+ symbol_regex = {}
+ blame_regex = {'name': r'author\s(.*)',
+ 'email': r'author-mail\s<(.*)>'}
+
+ def __init__(self, libpath, symbol):
+ self.libpath = libpath
+ self.symbol = symbol
+
+ # find variable definitions in C files, and functions in headers.
+ self.symbol_regex = \
+ {'*.c': r'^(?!extern).*' + self.symbol + '[^()]*;',
+ '*.h': r'__rte_experimental(?:.*\n){0,2}.*' + self.symbol}
+
+ def find_symbol_location(self):
+ '''Find where the symbol is defined in the source'''
+ for key in self.symbol_regex:
+ for path in Path(self.libpath).rglob(key):
+ file_text = open(path).read()
+
+ # find where the symbol is defined, either preceded by
+ # rte_experimental tag (functions)
+ # or followed by a ; (variables)
+
+ exp = self.symbol_regex[key]
+ pattern = re.compile(exp, re.MULTILINE)
+ search = pattern.search(file_text)
+
+ if search is not None:
+ symbol_pos = search.span()[1]
+ symbol_line = file_text.count('\n', 0, symbol_pos) + 1
+
+ return [str(path), symbol_line]
+ return None
+
+ def find_symbol_owner(self):
+ '''Find the symbols original contributors name and email'''
+ owners = {}
+ location = self.find_symbol_location()
+
+ if location is None:
+ return None
+
+ line = '-L {},{}'.format(location[1], location[1])
+ # git blame -p(orcelain) -L(ine) path
+ args = ['-p', line, location[0]]
+
+ try:
+ result = subprocess.run(['git', 'blame'] + args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ return None
+
+ blame = result.stdout.decode('utf-8')
+ for key in self.blame_regex:
+ pattern = re.compile(self.blame_regex[key], re.MULTILINE)
+ match = pattern.search(blame)
+
+ owners[key] = match.groups()[0]
+
+ return owners
+
+
+class SymbolCountOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + dpdk_releases
+
+ self.terminal_rows = get_terminal_rows()
+ self.row = 0
+
+ def set_terminal_output(self, dpdk_rel):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}' + \
+ ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+ self.column_fmt = '{:50}' + \
+ ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+ def set_csv_output(self, dpdk_rel):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},' + \
+ ','.join(['{},{}'] * (len(dpdk_rel)))
+ self.column_fmt = '{},' + \
+ ','.join(['{},'] * (len(dpdk_rel)))
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+ self.row += 1
+
+ def print_row(self, mapfile, symbols):
+ '''Print row of symbol values'''
+ mapfile = str(mapfile)
+ print(self.output_fmt.format(*([mapfile] + symbols)))
+ self.row += 1
+
+ if((self.terminal_rows > 0) and
+ ((self.row % self.terminal_rows) == 0)):
+ self.print_columns()
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class ListExpiredOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.terminal = True
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + \
+ ['expired (' + ','.join(dpdk_releases) + ')'] + \
+ ['contributor name', 'contributor email']
+
+ def set_terminal_output(self, _):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}{:<50}{:<25}{:<25}'
+ self.column_fmt = '{:50}{:50}{:25}{:25}'
+
+ def set_csv_output(self, _):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},{},{},{}'
+ self.column_fmt = '{},{},{},{}'
+ self.terminal = False
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+
+ def print_row(self, mapfile, symbols, owner):
+ '''Print row of symbol values'''
+
+ for symbol in symbols:
+ mapfile = str(mapfile)
+ name = owner[symbol]['name'] \
+ if owner[symbol] is not None else ''
+ email = owner[symbol]['email'] \
+ if owner[symbol] is not None else ''
+
+ print(self.output_fmt.format(mapfile, symbol, name, email))
+ if self.terminal:
+ mapfile = ''
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class CountSymbolsAction:
+ ''' Logic to count symbols added since a given release '''
+ IGNORE_SECTIONS = ['EXPERIMENTAL', 'INTERNAL']
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.symbols_count = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbol_count = experimental_count = 0
+
+ symbols = get_symbols(self.parser, release, self.path)
+
+ # which ABI versions are present, and which ones we care about
+ abi_vers = [abi_ver
+ for abi_ver in symbols
+ if abi_ver not in self.IGNORE_SECTIONS]
+
+ for abi_ver in abi_vers:
+ symbol_count += len(symbols[abi_ver])
+
+ # count experimental symbols
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental_count = len(symbols['EXPERIMENTAL'])
+
+ self.symbols_count += [symbol_count, experimental_count]
+
+ def __del__(self):
+ self.format_output.print_row(self.path.parent, self.symbols_count)
+
+
+class ListExpiredAction:
+ ''' Logic to list expired symbols between two releases '''
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.experimental_symbols = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbols = get_symbols(self.parser, release, self.path)
+
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental = [exp.strip() for exp in symbols['EXPERIMENTAL']]
+
+ self.experimental_symbols.append(experimental)
+
+ def __del__(self):
+ if len(self.experimental_symbols) != 2:
+ return
+
+ tmp = self.experimental_symbols
+ # find symbols present in both dpdk releases
+ intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+ # check for empty set
+ if intersect_syms == []:
+ return
+
+ sym_owner = {}
+ for sym in intersect_syms:
+ sym_owner[sym] = \
+ SymbolOwner(self.path.parent, sym).find_symbol_owner()
+
+ self.format_output.print_row(self.path.parent,
+ intersect_syms,
+ sym_owner)
+
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction,
+ 'count-symbols': CountSymbolsAction,
+ 'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput,
+ 'count-symbols': SymbolCountOutput,
+ 'list-expired': ListExpiredOutput}
+
+
+def main():
+ '''Main entry point'''
+
+ dpdk_releases = get_dpdk_releases()
+
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+
+ parser.add_argument('mode', choices=['count-symbols', 'list-expired'])
+ parser.add_argument('--format-output', choices=['terminal', 'csv'],
+ default='terminal')
+ parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+ default=SRC_DIRECTORIES)
+ parser.add_argument('--releases',
+ help='2 x comma separated release tags e.g. \''
+ + ','.join([dpdk_releases[0], dpdk_releases[-1]])
+ + '\'')
+ args = parser.parse_args()
+
+ if args.releases is not None:
+ dpdk_releases = args.releases.split(',')
+
+ if args.mode == 'list-expired':
+ if len(dpdk_releases) < 2:
+ sys.exit('Please specify two releases to compare '
+ 'in \'list-expired\' mode.')
+ dpdk_releases = [dpdk_releases[0],
+ dpdk_releases[len(dpdk_releases) - 1]]
+
+ action = ACTIONS[args.mode]
+ format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+ map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+ map_parser = makeGrammar(map_grammar, {})
+
+ format_output.print_columns()
+
+ for src_dir in args.directory.split(','):
+ for path in Path(src_dir).rglob('*.map'):
+ release_action = action(path, map_parser, format_output)
+
+ for release in dpdk_releases:
+ release_action.add_mapfile(release)
+
+ # all the magic happens in the destructor
+ del release_action
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-09-03 13:23 3% ` [dpdk-dev] [PATCH v11 " Ray Kinsella
@ 2021-09-08 15:12 3% ` Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
` (2 more replies)
2021-09-08 15:13 3% ` [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-09 13:48 3% ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
4 siblings, 3 replies; 200+ results
From: Ray Kinsella @ 2021-09-08 15:12 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr,
aconole, roy.fan.zhang, arkadiuszx.kusztal, gakhil
Scripts to count and track the lifecycle of DPDK symbols.
The symbol-tool script reports on the growth of symbols over releases
and lists expired symbols. The notify-symbol-maintainers script
consumes the input from symbol-tool and generates email notifications
of expired symbols.
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments
v8: added tool to notify maintainers of expired symbols
v9: removed hardcoded email addresses and script names
v10: added ability to identify and notify the original contributors
v11: addressed feedback from Aaron Conole, including PEP8 errors.
Ray Kinsella (3):
devtools: script to track symbols over releases
devtools: script to send notifications of expired symbols
maintainers: add new abi scripts
MAINTAINERS | 2 +
devtools/notify-symbol-maintainers.py | 302 +++++++++++++++
devtools/symbol-tool.py | 505 ++++++++++++++++++++++++++
3 files changed, 809 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
create mode 100755 devtools/symbol-tool.py
--
2.26.2
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] fib: promote experimental API's to stable
2021-09-08 13:57 3% ` Medvedkin, Vladimir
@ 2021-09-08 15:10 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 15:10 UTC (permalink / raw)
To: Medvedkin, Vladimir; +Cc: dev, konstantin.ananyev
On Wed, 8 Sep 2021 15:57:54 +0200
"Medvedkin, Vladimir" <vladimir.medvedkin@intel.com> wrote:
> Hi Stephen,
>
> On 07/09/2021 02:34, Stephen Hemminger wrote:
> > On Mon, 6 Sep 2021 17:01:15 +0100
> > Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
> >
> >> diff --git a/lib/fib/version.map b/lib/fib/version.map
> >> index be975ea..af76add 100644
> >> --- a/lib/fib/version.map
> >> +++ b/lib/fib/version.map
> >> @@ -1,4 +1,4 @@
> >> -EXPERIMENTAL {
> >> +DPDK_22 {
> >> global:
> >>
> >> rte_fib_add;
> >> --
> >
> > I think you should add to existing version number
> > \
>
> Could you please clarify what you meant? There was no existing stable
> API section in the fib's version.map, and the current ABI version is DPDK_22.
>
Ok, I missed that. I was looking at so many other files where
the version was DPDK_20.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] fib: promote experimental API's to stable
@ 2021-09-08 13:57 3% ` Medvedkin, Vladimir
2021-09-08 15:10 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2021-09-08 13:57 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, konstantin.ananyev
Hi Stephen,
On 07/09/2021 02:34, Stephen Hemminger wrote:
> On Mon, 6 Sep 2021 17:01:15 +0100
> Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
>
>> diff --git a/lib/fib/version.map b/lib/fib/version.map
>> index be975ea..af76add 100644
>> --- a/lib/fib/version.map
>> +++ b/lib/fib/version.map
>> @@ -1,4 +1,4 @@
>> -EXPERIMENTAL {
>> +DPDK_22 {
>> global:
>>
>> rte_fib_add;
>> --
>
> I think you should add to existing version number
> \
Could you please clarify what you meant? There was no existing stable
API section in the fib's version.map, and the current ABI version is DPDK_22.
--
Regards,
Vladimir
^ permalink raw reply [relevance 3%]
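To make the exchange above concrete, the promoted section of
lib/fib/version.map would end up looking roughly like this (only the section
rename appears in the diff; the remaining symbols and the usual local:
catch-all are assumptions based on common DPDK version.map conventions):

DPDK_22 {
        global:

        rte_fib_add;
        ... (remaining rte_fib_* symbols) ...

        local: *;
};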
* Re: [dpdk-dev] [RFC 04/15] eventdev: move inline APIs into separate structure
2021-08-23 19:40 2% ` [dpdk-dev] [RFC 04/15] eventdev: move inline APIs into separate structure pbhagavatula
@ 2021-09-08 12:03 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-08 12:03 UTC (permalink / raw)
To: pbhagavatula, jerinj; +Cc: konstantin.ananyev, dev
On 23/08/2021 20:40, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Move fastpath inline function pointers from rte_eventdev into a
> separate structure accessed via a flat array.
> The intention is to make rte_eventdev and related structures private
> to avoid future API/ABI breakages.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> lib/eventdev/eventdev_pmd.h | 10 ++++
> lib/eventdev/eventdev_private.c | 100 +++++++++++++++++++++++++++++++
> lib/eventdev/meson.build | 1 +
> lib/eventdev/rte_eventdev.c | 25 +++++++-
> lib/eventdev/rte_eventdev_core.h | 44 ++++++++++++++
> lib/eventdev/version.map | 4 ++
> 6 files changed, 183 insertions(+), 1 deletion(-)
> create mode 100644 lib/eventdev/eventdev_private.c
>
I will defer to others on the wisdom of exposing rte_eventdev_api.
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
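For readers skimming the thread, a minimal C sketch of the pattern being
acked here (fastpath function pointers pulled out of the soon-to-be-private
rte_eventdev struct into a flat array indexed by device id). All identifiers
below are illustrative placeholders, not the names used in the actual patch:

/* One fastpath-ops slot per event device, in a plain flat array. */
struct event_fp_ops {
        uint16_t (*enqueue_burst)(void *port, const struct rte_event ev[],
                                  uint16_t nb_events);
        void **port_data; /* per-port argument handed back to the driver */
};

extern struct event_fp_ops event_fp_ops[RTE_EVENT_MAX_DEVS];

/* The public inline wrapper only touches the flat array, so
 * struct rte_eventdev itself no longer has to be exposed in the ABI.
 */
static inline uint16_t
event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
                    const struct rte_event ev[], uint16_t nb_events)
{
        const struct event_fp_ops *fp = &event_fp_ops[dev_id];

        return fp->enqueue_burst(fp->port_data[port_id], ev, nb_events);
}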
* Re: [dpdk-dev] [PATCH 1/3] security: add option to configure tunnel header verification
2021-09-08 8:21 5% ` [dpdk-dev] [PATCH 1/3] security: " Tejasree Kondoj
2021-09-08 7:46 0% ` Hemant Agrawal
@ 2021-09-08 10:42 3% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-08 10:42 UTC (permalink / raw)
To: Tejasree Kondoj, Radu Nicolau, Declan Doherty
Cc: Tejasree Kondoj, Anoob Joseph, Ankur Dwivedi,
Jerin Jacob Kollanukkaran, Konstantin Ananyev, Ciara Power,
Hemant Agrawal, Gagandeep Singh, Fan Zhang, Archana Muniganti,
dev
> Add option to indicate whether outer header verification
> needs to be done as part of inbound IPsec processing.
>
> With inline IPsec processing, SA lookup would be happening
> in the Rx path of rte_ethdev. When rte_flow is configured to
> support more than one SA, SPI would be used to lookup SA.
> In such cases, additional verification would be required to
> ensure duplicate SPIs are not getting processed in the inline path.
>
> For lookaside cases, the same option can be used by application
> to offload tunnel verification to the PMD.
>
> These verifications would help in averting possible DoS attacks.
>
> Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
The deprecation notice should also be removed for this feature addition /
ABI breakage.
Other than that
Acked-by: Akhil Goyal <gakhil@marvell.com>
> lib/security/rte_security.h | 17 +++++++++++++++++
> 2 files changed, 22 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 0e3ed28378..b0606cb542 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -136,6 +136,11 @@ ABI Changes
> soft and hard SA expiry limits. Limits can be either in units of packets or
> bytes.
>
> +* security: add IPsec SA option to configure tunnel header verification
> +
> + * Added SA option to indicate whether outer header verification need to
> be
> + done as part of inbound IPsec processing.
> +
>
> Known Issues
> ------------
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 95c169d6cf..2a61cad885 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -55,6 +55,14 @@ enum rte_security_ipsec_tunnel_type {
> /**< Outer header is IPv6 */
> };
>
> +/**
> + * IPSEC tunnel header verification mode
> + *
> + * Controls how outer IP header is verified in inbound.
> + */
> +#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR 0x1
> +#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR 0x2
> +
> /**
> * Security context for crypto/eth devices
> *
> @@ -195,6 +203,15 @@ struct rte_security_ipsec_sa_options {
> * by the PMD.
> */
> uint32_t iv_gen_disable : 1;
> +
> + /** Verify tunnel header in inbound
> + * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR``: Verify
> destination
> + * IP address.
> + *
> + * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR``: Verify
> both
> + * source and destination IP addresses.
> + */
> + uint32_t tunnel_hdr_verify : 2;
> };
>
> /** IPSec security association direction */
> --
> 2.27.0
^ permalink raw reply [relevance 3%]
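To sketch how an application would opt in to this verification (field and
macro names as in the patch above; the rest of the SA configuration and the
surrounding rte_security session setup are elided):

struct rte_security_ipsec_xform ipsec_xform = { 0 };

ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
/* Ask the PMD to verify both outer source and destination addresses
 * before accepting an inbound packet for this SA.
 */
ipsec_xform.options.tunnel_hdr_verify =
        RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR;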
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-07 8:47 4% ` Ferruh Yigit
@ 2021-09-08 10:29 0% ` Anoob Joseph
2021-09-13 6:56 0% ` Xu, Rosen
0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-09-08 10:29 UTC (permalink / raw)
To: Ferruh Yigit, Xu, Rosen, Andrew Rybchenko
Cc: radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, Ankur Dwivedi, andrew.rybchenko,
Akhil Goyal, dev
Hi Ferruh, Rosen, Andrew,
Please see inline.
Thanks,
Anoob
> Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>
> External Email
>
> ----------------------------------------------------------------------
> On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> > Reassembly is a costly operation if it is done in software; however,
> > if it is offloaded to HW, it can considerably save application cycles.
> > The operation becomes even costlier if IP fragments are
> > encrypted.
> >
> > To resolve above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY
> > is introduced in ethdev for devices which can attempt reassembly of
> > packets in hardware.
> > rte_eth_dev_info is added with the reassembly capabilities which a
> > device can support.
> > Now, if IP fragments are encrypted, reassembly can also be attempted
> > while doing inline IPsec processing.
> > This is controlled by a flag in rte_security_ipsec_sa_options to
> > enable reassembly of encrypted IP fragments in the inline path.
> >
> > The resulting reassembled packet would be a typical segmented mbuf in
> > case of success.
> >
> > And if reassembly of the fragments fails or is incomplete (if
> > fragments do not arrive before the reass_timeout), the mbuf is updated
> > with the ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and the mbuf is returned
> > as is. The application may then decide the fate of the packet: wait
> > longer for fragments to arrive, or drop it.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > lib/ethdev/rte_ethdev.c | 1 +
> > lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> > lib/mbuf/rte_mbuf_core.h | 3 ++-
> > lib/security/rte_security.h | 10 ++++++++++
> > 4 files changed, 30 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> > 9d95cd11e1..1ab3a093cf 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -119,6 +119,7 @@ static const struct {
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > d2b27c351f..e89a4dc1eb 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
>
> previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; it has been a long
> time since that offload was removed, but I am not sure if it causes any
> problem to re-use it.
>
> > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > /**
> > * Timestamp is set by the driver in
> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> > */
> > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> (UINT16_MAX)
> >
> > +/**
> > + * Reassembly capabilities that a device can support.
> > + * The device which can support reassembly offload should set
> > + * DEV_RX_OFFLOAD_REASSEMBLY
> > + */
> > +struct rte_eth_reass_capa {
> > + /** Maximum time in ns that a fragment can wait for further
> fragments */
> > + uint64_t reass_timeout;
> > + /** Maximum number of fragments that device can reassemble */
> > + uint16_t max_frags;
> > + /** Reserved for future capabilities */
> > + uint16_t reserved[3];
> > +};
> > +
>
> I wonder if there is any other hardware around that supports reassembly
> offload; it would be good to get more feedback on the capabilities list.
>
> > /**
> > * Ethernet device associated switch information
> > */
> > @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> > * embedded managed interconnect/switch.
> > */
> > struct rte_eth_switch_info switch_info;
> > + /* Reassembly capabilities of a device for reassembly offload */
> > + struct rte_eth_reass_capa reass_capa;
> >
> > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
>
> Reserved fields were added to be able to update the struct without breaking
> the ABI, so that a critical change doesn't have to wait until the next ABI
> break release.
> Since this is an ABI break release, we can keep the reserved field and add the
> new struct. Or this can be an opportunity to get rid of the reserved field.
>
> Personally I have no objection to getting rid of the reserved field, but it
> is better to agree on this explicitly.
>
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > };
> >
> > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h index
> > bb38d7f581..cea25c87f7 100644
> > --- a/lib/mbuf/rte_mbuf_core.h
> > +++ b/lib/mbuf/rte_mbuf_core.h
> > @@ -200,10 +200,11 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
> << 22))
> > +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
> >
>
> Similar comment as Andrew's: what is the expectation from the application if
> this flag exists? Can we drop it to simplify the logic in the application?
[Anoob] There can be a few cases where the hardware/NIC attempts inline reassembly but fails to complete it
1. Number of fragments is larger than what is supported by the hardware
2. Hardware reassembly resources are exhausted (due to limited reassembly contexts etc)
3. Reassembly errors such as overlapping fragments
4. Wait time exhausted (or reassembly timeout)
In such cases, the application would be required to retrieve the original fragments so that it can attempt reassembly in software. The incomplete flag is basically useful for 2 purposes:
1. The application would need to retrieve the time the fragment has already spent in hardware reassembly so that the software reassembly attempt can compensate for it. Otherwise, the reassembly timeout across hardware + software will not be accurate
2. Retrieve original fragments. With this proposal, an incomplete reassembly would result in a chained mbuf, but the segments need not be consecutive. To explain a bit more,
Suppose we have a packet that is fragmented into 3 fragments, and fragment 3 & fragment 1 arrive in that order. Fragment 2 didn't arrive, and the hardware ultimately pushes the packet out. In that case, the application would receive a chained/segmented mbuf with fragment 1 & fragment 3 chained.
Now, this chained mbuf can't be treated like a regular chained mbuf. Each fragment would have its own IP hdr, and there are fragments missing in between. The only thing the application is expected to do is retrieve the fragments and push them to s/w reassembly.
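In code, the handling Anoob describes could look roughly like this (a sketch
only: sw_reassemble() and process_packet() are hypothetical application
helpers, e.g. wrappers around the rte_ip_frag library, not APIs proposed by
this RFC):

if (mbuf->ol_flags & PKT_RX_REASSEMBLY_INCOMPLETE) {
        /* HW gave up: the chain carries the original fragments, each
         * with its own IP header, possibly with gaps in between.
         */
        struct rte_mbuf *seg = mbuf;

        while (seg != NULL) {
                struct rte_mbuf *next = seg->next;

                /* Detach the fragment so it is a standalone mbuf again. */
                seg->next = NULL;
                seg->nb_segs = 1;
                seg->pkt_len = seg->data_len;
                sw_reassemble(seg);
                seg = next;
        }
} else {
        /* Success: a typical segmented mbuf, handle as usual. */
        process_packet(mbuf);
}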
>
> > /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_FIRST_FREE (1ULL << 24)
> > #define PKT_LAST_FREE (1ULL << 40)
> >
> > /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 88d31de0a6..364eeb5cd4 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> > * * 0: Disable per session security statistics collection for this SA.
> > */
> > uint32_t stats : 1;
> > +
> > + /** Enable reassembly on incoming packets.
> > + *
> > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > + * this SA, if supported by the driver. This feature will work
> > + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> > + * inline ethernet device.
> > + * * 0: Disable reassembly of packets (default).
> > + */
> > + uint32_t reass_en : 1;
> > };
> >
> > /** IPSec security association direction */
> >
^ permalink raw reply [relevance 0%]
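As a companion sketch for the capability side of the RFC quoted above (field
names as proposed in the patch; return-value checks elided and <inttypes.h>
assumed):

struct rte_eth_dev_info dev_info;

rte_eth_dev_info_get(port_id, &dev_info);
if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_REASSEMBLY) {
        printf("reassembly: max %u frags, timeout %" PRIu64 " ns\n",
               dev_info.reass_capa.max_frags,
               dev_info.reass_capa.reass_timeout);
}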
* Re: [dpdk-dev] [RFC PATCH v5 1/5] sched: add PIE based congestion management
2021-09-07 19:14 3% ` Stephen Hemminger
@ 2021-09-08 8:49 3% ` Liguzinski, WojciechX
0 siblings, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-09-08 8:49 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Singh, Jasvinder, Dumitrescu, Cristian, Ajmera, Megha
Thanks Stephen,
I will do my best to apply your comments.
Best Regards,
Wojciech Liguzinski
-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Tuesday, September 7, 2021 9:15 PM
To: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
Cc: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Ajmera, Megha <megha.ajmera@intel.com>
Subject: Re: [dpdk-dev] [RFC PATCH v5 1/5] sched: add PIE based congestion management
On Tue, 7 Sep 2021 07:33:24 +0000
"Liguzinski, WojciechX" <wojciechx.liguzinski@intel.com> wrote:
> +/**
> + * @brief make a decision to drop or enqueue a packet based on probability
> + * criteria
> + *
> + * @param pie_cfg [in] config pointer to a PIE configuration
> +parameter structure
> + * @param pie [in, out] data pointer to PIE runtime data
> + * @param time [in] current time (measured in cpu cycles) */ static
> +inline void __rte_experimental _calc_drop_probability(const struct
> +rte_pie_config *pie_cfg,
> + struct rte_pie *pie, uint64_t time)
This code adds a lot of inline functions in the name of performance.
But every inline like this means the internal ABI for the implementation has to be exposed.
You would probably get a bigger performance bump from not using floating point in the internal math than from the minor performance optimization of having so many inlines.
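To illustrate the fixed-point alternative (a sketch of the general technique,
not the math in this patch): the drop probability can be kept as a Q0.32
value, i.e. scaled to [0, UINT32_MAX], so the enqueue-time decision reduces
to a single integer compare against rte_rand():

/* drop_prob is the PIE drop probability scaled to [0, UINT32_MAX]. */
static inline int
pie_should_drop(uint32_t drop_prob)
{
        return (uint32_t)rte_rand() < drop_prob;
}

The periodic probability update can stay in the same integer domain by
pre-scaling alpha/beta to Q0.32 when the configuration is parsed, keeping
floating point out of the fast path entirely.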
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-09-08 7:42 0% ` Shijith Thotton
@ 2021-09-08 7:53 0% ` Gujjar, Abhinandan S
0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2021-09-08 7:53 UTC (permalink / raw)
To: Shijith Thotton, dev
Cc: Jerin Jacob Kollanukkaran, Anoob Joseph,
Pavan Nikhilesh Bhagavatula, Akhil Goyal, Ray Kinsella,
Ankur Dwivedi
Hi Shijith,
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Wednesday, September 8, 2021 1:13 PM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Anoob Joseph
> <anoobj@marvell.com>; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>; Akhil Goyal <gakhil@marvell.com>; Ray
> Kinsella <mdr@ashroe.eu>; Ankur Dwivedi <adwivedi@marvell.com>
> Subject: RE: [PATCH v3] eventdev: update crypto adapter metadata
> structures
>
> >>
> >> >> In crypto adapter metadata, reserved bytes in request info
> >> >> structure is a space holder for response info. It enforces an
> >> >> order of operation if the structures are updated using memcpy to
> >> >> avoid overwriting response info. It is logical to move the
> >> >> reserved space out of request info. It also solves the ordering issue
> mentioned before.
> >> >I would like to understand what kind of ordering issue you have
> >> >faced with the current approach. Could you please give an
> >> >example/sequence
> >> and explain?
> >> >
> >>
> >> I have seen this issue with crypto adapter autotest (#n215).
> >>
> >> Example:
> >> rte_memcpy(&m_data.response_info, &response_info,
> >> sizeof(response_info)); rte_memcpy(&m_data.request_info,
> >> &request_info, sizeof(request_info));
> >>
> >> Here response info is getting overwritten by request info.
> >> Above lines can be reordered to fix the issue, but can be ignored with this
> patch.
> >There is a reason for designing the metadata in this way.
> >Right now, sizeof (union rte_event_crypto_metadata) is 16 bytes.
> >So, the session based case needs just 16 bytes to store the data.
> >Whereas, for sessionless case each crypto_ops requires another 16 bytes.
> >
> >By changing the struct in the following way you are doubling the memory
> >requirement.
> >With the changes, for sessionless case, each crypto op requires 32
> >bytes of space instead of 16 bytes and the mempool will be bigger.
> >This will have the perf impact too!
> >
> >You can just copy the individual members(cdev_id & queue_pair_id) after
> >the response_info.
> >OR You have a better way?
> >
>
> I missed the second word of event structure. It adds an extra 8 bytes, which
> is not required.
I guess you meant it is not adding; it is overlapping with the request information.
> Let me know what you think of the below change to the metadata structure.
>
> struct rte_event_crypto_metadata {
> uint64_t response_info; // 8 bytes
With this, it lacks clarity that it is event information.
> struct rte_event_crypto_request request_info; // 8 bytes };
>
> Total structure size is 16 bytes.
>
> Response info can be set like below in test app:
> m_data.response_info = response_info.event;
>
> The main aim of this patch is to decouple response info from request info.
The decoupling was required as it was doing memcpy.
If you copy the individual members of request info (after response), you don't require it.
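Spelled out, the suggestion is something like this (a sketch against the
existing 16-byte union; it relies on cdev_id/queue_pair_id overlapping only
the second 8 bytes of the event, which the response path does not need
pre-filled):

union rte_event_crypto_metadata m_data;

/* Copy the full event first... */
m_data.response_info = response_info;
/* ...then overwrite only the overlapping request members. */
m_data.request_info.cdev_id = request_info.cdev_id;
m_data.request_info.queue_pair_id = request_info.queue_pair_id;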
>
> >>
> >> >>
> >> >> This patch removes the reserve field from request info and makes
> >> >> event crypto metadata type to structure from union to make space
> >> >> for response info.
> >> >>
> >> >> App and drivers are updated as per metadata change.
> >> >>
> >> >> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> >> >> Acked-by: Anoob Joseph <anoobj@marvell.com>
> >> >> ---
> >> >> v3:
> >> >> * Updated ABI section of release notes.
> >> >>
> >> >> v2:
> >> >> * Updated deprecation notice.
> >> >>
> >> >> v1:
> >> >> * Rebased.
> >> >>
> >> >> app/test/test_event_crypto_adapter.c | 14 +++++++-------
> >> >> doc/guides/rel_notes/deprecation.rst | 6 ------
> >> >> doc/guides/rel_notes/release_21_11.rst | 2 ++
> >> >> drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
> >> >> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
> >> >> .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
> >> >> lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
> >> >> lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
> >> >> 8 files changed, 26 insertions(+), 35 deletions(-)
> >> >>
> >> >> diff --git a/app/test/test_event_crypto_adapter.c
> >> >> b/app/test/test_event_crypto_adapter.c
> >> >> index 3ad20921e2..0d73694d3a 100644
> >> >> --- a/app/test/test_event_crypto_adapter.c
> >> >> +++ b/app/test/test_event_crypto_adapter.c
> >> >> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)
> {
> >> >> struct rte_crypto_sym_xform cipher_xform;
> >> >> struct rte_cryptodev_sym_session *sess;
> >> >> - union rte_event_crypto_metadata m_data;
> >> >> + struct rte_event_crypto_metadata m_data;
> >> >> struct rte_crypto_sym_op *sym_op;
> >> >> struct rte_crypto_op *op;
> >> >> struct rte_mbuf *m;
> >> >> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less) {
> >> >> struct rte_crypto_sym_xform cipher_xform;
> >> >> struct rte_cryptodev_sym_session *sess;
> >> >> - union rte_event_crypto_metadata m_data;
> >> >> + struct rte_event_crypto_metadata m_data;
> >> >> struct rte_crypto_sym_op *sym_op;
> >> >> struct rte_crypto_op *op;
> >> >> struct rte_mbuf *m;
> >> >> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
> >> >> if (cap &
> >> >> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
> >> >> /* Fill in private user data information */
> >> >> rte_memcpy(&m_data.response_info,
> >> &response_info,
> >> >> - sizeof(m_data));
> >> >> + sizeof(response_info));
> >> >> rte_cryptodev_sym_session_set_user_data(sess,
> >> >> &m_data, sizeof(m_data));
> >> >> }
> >> >> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
> >> >> op->private_data_offset = len;
> >> >> /* Fill in private data information */
> >> >> rte_memcpy(&m_data.response_info, &response_info,
> >> >> - sizeof(m_data));
> >> >> + sizeof(response_info));
> >> >> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
> >> >> }
> >> >>
> >> >> @@ -519,7 +519,7 @@ configure_cryptodev(void)
> >> >> DEFAULT_NUM_XFORMS *
> >> >> sizeof(struct rte_crypto_sym_xform) +
> >> >> MAXIMUM_IV_LENGTH +
> >> >> - sizeof(union rte_event_crypto_metadata),
> >> >> + sizeof(struct rte_event_crypto_metadata),
> >> >> rte_socket_id());
> >> >> if (params.op_mpool == NULL) {
> >> >> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
> >> @@ -549,12
> >> >> +549,12 @@ configure_cryptodev(void)
> >> >> * to include the session headers & private data
> >> >> */
> >> >> session_size =
> >> >> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> >> >> - session_size += sizeof(union rte_event_crypto_metadata);
> >> >> + session_size += sizeof(struct rte_event_crypto_metadata);
> >> >>
> >> >> params.session_mpool = rte_cryptodev_sym_session_pool_create(
> >> >> "CRYPTO_ADAPTER_SESSION_MP",
> >> >> MAX_NB_SESSIONS, 0, 0,
> >> >> - sizeof(union rte_event_crypto_metadata),
> >> >> + sizeof(struct rte_event_crypto_metadata),
> >> >> SOCKET_ID_ANY);
> >> >> TEST_ASSERT_NOT_NULL(params.session_mpool,
> >> >> "session mempool allocation failed\n"); diff --git
> >> >> a/doc/guides/rel_notes/deprecation.rst
> >> >> b/doc/guides/rel_notes/deprecation.rst
> >> >> index 76a4abfd6b..58ee95c020 100644
> >> >> --- a/doc/guides/rel_notes/deprecation.rst
> >> >> +++ b/doc/guides/rel_notes/deprecation.rst
> >> >> @@ -266,12 +266,6 @@ Deprecation Notices
> >> >> values to the function ``rte_event_eth_rx_adapter_queue_add``
> using
> >> >> the structure ``rte_event_eth_rx_adapter_queue_add``.
> >> >>
> >> >> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a
> >> >> space holder
> >> >> - for ``response_info``. Both should be decoupled for better clarity.
> >> >> - New space for ``response_info`` can be made by changing
> >> >> - ``rte_event_crypto_metadata`` type to structure from union.
> >> >> - This change is targeted for DPDK 21.11.
> >> >> -
> >> >> * metrics: The function ``rte_metrics_init`` will have a non-void return
> >> >> in order to notify errors instead of calling ``rte_exit``.
> >> >>
> >> >> diff --git a/doc/guides/rel_notes/release_21_11.rst
> >> >> b/doc/guides/rel_notes/release_21_11.rst
> >> >> index d707a554ef..ab76d5dd55 100644
> >> >> --- a/doc/guides/rel_notes/release_21_11.rst
> >> >> +++ b/doc/guides/rel_notes/release_21_11.rst
> >> >> @@ -100,6 +100,8 @@ ABI Changes
> >> >> Also, make sure to start the actual text at the margin.
> >> >>
> >> =======================================================
> >> >>
> >> >> +* eventdev: Modified type of ``union rte_event_crypto_metadata``
> >> >> +to struct and
> >> >> + removed reserved bytes from ``struct rte_event_crypto_request``.
> >> >>
> >> >> Known Issues
> >> >> ------------
> >> >> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> >> b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> >> index eac6796cfb..c51be63146 100644
> >> >> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> >> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> >> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws,
> >> >> uintptr_t req,
> >> >> ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]);
> >> >> }
> >> >>
> >> >> -static inline union rte_event_crypto_metadata *
> >> >> +static inline struct rte_event_crypto_metadata *
> >> >> get_event_crypto_mdata(struct rte_crypto_op *op) {
> >> >> - union rte_event_crypto_metadata *ec_mdata;
> >> >> + struct rte_event_crypto_metadata *ec_mdata;
> >> >>
> >> >> if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> >> >> ec_mdata = rte_cryptodev_sym_session_get_user_data(
> >> >> op->sym->session);
> >> >> else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> >> >> op->private_data_offset)
> >> >> - ec_mdata = (union rte_event_crypto_metadata *)
> >> >> + ec_mdata = (struct rte_event_crypto_metadata *)
> >> >> ((uint8_t *)op + op->private_data_offset);
> >> >> else
> >> >> return NULL;
> >> >> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op
> >> *op)
> >> >> uint16_t __rte_hot otx_crypto_adapter_enqueue(void *port, struct
> >> >> rte_crypto_op *op) {
> >> >> - union rte_event_crypto_metadata *ec_mdata;
> >> >> + struct rte_event_crypto_metadata *ec_mdata;
> >> >> struct cpt_instance *instance;
> >> >> struct cpt_request_info *req;
> >> >> struct rte_event *rsp_info;
> >> >> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> >> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> >> index 42100154cd..952d1352f4 100644
> >> >> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> >> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> >> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct
> otx2_cpt_qp
> >> *qp,
> >> >> struct rte_crypto_op *op,
> >> >> uint64_t cpt_inst_w7)
> >> >> {
> >> >> - union rte_event_crypto_metadata *m_data;
> >> >> + struct rte_event_crypto_metadata *m_data;
> >> >> union cpt_inst_s inst;
> >> >> uint64_t lmt_status;
> >> >>
> >> >> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct
> otx2_cpt_qp
> >> *qp,
> >> >> }
> >> >> } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> >> >> op->private_data_offset) {
> >> >> - m_data = (union rte_event_crypto_metadata *)
> >> >> + m_data = (struct rte_event_crypto_metadata *)
> >> >> ((uint8_t *)op +
> >> >> op->private_data_offset);
> >> >> } else {
> >> >> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> >> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> >> index ecf7eb9f56..458e8306d7 100644
> >> >> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> >> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> >> @@ -16,7 +16,7 @@
> >> >> static inline uint16_t
> >> >> otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) {
> >> >> - union rte_event_crypto_metadata *m_data;
> >> >> + struct rte_event_crypto_metadata *m_data;
> >> >> struct rte_crypto_op *crypto_op;
> >> >> struct rte_cryptodev *cdev;
> >> >> struct otx2_cpt_qp *qp;
> >> >> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct
> >> >> rte_event
> >> >> *ev)
> >> >> qp_id = m_data->request_info.queue_pair_id;
> >> >> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS
> >> &&
> >> >> crypto_op->private_data_offset) {
> >> >> - m_data = (union rte_event_crypto_metadata *)
> >> >> + m_data = (struct rte_event_crypto_metadata *)
> >> >> ((uint8_t *)crypto_op +
> >> >> crypto_op->private_data_offset);
> >> >> cdev_id = m_data->request_info.cdev_id; diff --git
> >> >> a/lib/eventdev/rte_event_crypto_adapter.c
> >> >> b/lib/eventdev/rte_event_crypto_adapter.c
> >> >> index e1d38d383d..6977391ae9 100644
> >> >> --- a/lib/eventdev/rte_event_crypto_adapter.c
> >> >> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> >> >> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct
> >> >> rte_event_crypto_adapter *adapter,
> >> >> struct rte_event *ev, unsigned int cnt) {
> >> >> struct rte_event_crypto_adapter_stats *stats = &adapter-
> >> >> >crypto_stats;
> >> >> - union rte_event_crypto_metadata *m_data = NULL;
> >> >> + struct rte_event_crypto_metadata *m_data = NULL;
> >> >> struct crypto_queue_pair_info *qp_info = NULL;
> >> >> struct rte_crypto_op *crypto_op;
> >> >> unsigned int i, n;
> >> >> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct
> >> >> rte_event_crypto_adapter *adapter,
> >> >> len++;
> >> >> } else if (crypto_op->sess_type ==
> >> RTE_CRYPTO_OP_SESSIONLESS &&
> >> >> crypto_op->private_data_offset) {
> >> >> - m_data = (union rte_event_crypto_metadata *)
> >> >> + m_data = (struct rte_event_crypto_metadata *)
> >> >> ((uint8_t *)crypto_op +
> >> >> crypto_op->private_data_offset);
> >> >> cdev_id = m_data->request_info.cdev_id; @@ -
> >> >> 504,7 +504,7 @@ eca_ops_enqueue_burst(struct
> >> rte_event_crypto_adapter
> >> >> *adapter,
> >> >> struct rte_crypto_op **ops, uint16_t num) {
> >> >> struct rte_event_crypto_adapter_stats *stats = &adapter-
> >> >> >crypto_stats;
> >> >> - union rte_event_crypto_metadata *m_data = NULL;
> >> >> + struct rte_event_crypto_metadata *m_data = NULL;
> >> >> uint8_t event_dev_id = adapter->eventdev_id;
> >> >> uint8_t event_port_id = adapter->event_port_id;
> >> >> struct rte_event events[BATCH_SIZE]; @@ -523,7 +523,7 @@
> >> >> eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
> >> >> ops[i]->sym->session);
> >> >> } else if (ops[i]->sess_type ==
> RTE_CRYPTO_OP_SESSIONLESS &&
> >> >> ops[i]->private_data_offset) {
> >> >> - m_data = (union rte_event_crypto_metadata *)
> >> >> + m_data = (struct rte_event_crypto_metadata *)
> >> >> ((uint8_t *)ops[i] +
> >> >> ops[i]->private_data_offset);
> >> >> }
> >> >> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
> >> >> b/lib/eventdev/rte_event_crypto_adapter.h
> >> >> index f8c6cca87c..3c24d9d9df 100644
> >> >> --- a/lib/eventdev/rte_event_crypto_adapter.h
> >> >> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> >> >> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
> >> >> * provide event request information to the adapter.
> >> >> */
> >> >> struct rte_event_crypto_request {
> >> >> - uint8_t resv[8];
> >> >> - /**< Overlaps with first 8 bytes of struct rte_event
> >> >> - * that encode the response event information. Application
> >> >> - * is expected to fill in struct rte_event response_info.
> >> >> - */
> >> >> uint16_t cdev_id;
> >> >> /**< cryptodev ID to be used */
> >> >> uint16_t queue_pair_id;
> >> >> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
> >> >> * operation. If the transfer is done by SW, event response
> information
> >> >> * will be used by the adapter.
> >> >> */
> >> >> -union rte_event_crypto_metadata {
> >> >> - struct rte_event_crypto_request request_info;
> >> >> - /**< Request information to be filled in by application
> >> >> - * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> >> >> - */
> >> >> +struct rte_event_crypto_metadata {
> >> >> struct rte_event response_info;
> >> >> /**< Response information to be filled in by application
> >> >> * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> >> >> * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> >> >> */
> >> >> + struct rte_event_crypto_request request_info;
> >> >> + /**< Request information to be filled in by application
> >> >> + * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> >> >> + */
> >> >> };
> >> >>
> >> >> /**
> >> >> --
> >> >> 2.25.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 1/3] security: add option to configure tunnel header verification
2021-09-08 8:21 5% ` [dpdk-dev] [PATCH 1/3] security: " Tejasree Kondoj
@ 2021-09-08 7:46 0% ` Hemant Agrawal
2021-09-08 10:42 3% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Hemant Agrawal @ 2021-09-08 7:46 UTC (permalink / raw)
To: dev
On 9/8/2021 1:51 PM, Tejasree Kondoj wrote:
> Add option to indicate whether outer header verification
> needs to be done as part of inbound IPsec processing.
>
> With inline IPsec processing, SA lookup would be happening
> in the Rx path of rte_ethdev. When rte_flow is configured to
> support more than one SA, SPI would be used to lookup SA.
> In such cases, additional verification would be required to
> ensure duplicate SPIs are not getting processed in the inline path.
>
> For lookaside cases, the same option can be used by application
> to offload tunnel verification to the PMD.
>
> These verifications would help in averting possible DoS attacks.
>
> Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
> lib/security/rte_security.h | 17 +++++++++++++++++
> 2 files changed, 22 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 0e3ed28378..b0606cb542 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -136,6 +136,11 @@ ABI Changes
> soft and hard SA expiry limits. Limits can be either in units of packets or
> bytes.
>
> +* security: add IPsec SA option to configure tunnel header verification
> +
> + * Added SA option to indicate whether outer header verification need to be
> + done as part of inbound IPsec processing.
> +
>
> Known Issues
> ------------
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 95c169d6cf..2a61cad885 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -55,6 +55,14 @@ enum rte_security_ipsec_tunnel_type {
> /**< Outer header is IPv6 */
> };
>
> +/**
> + * IPSEC tunnel header verification mode
> + *
> + * Controls how outer IP header is verified in inbound.
> + */
> +#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR 0x1
> +#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR 0x2
> +
> /**
> * Security context for crypto/eth devices
> *
> @@ -195,6 +203,15 @@ struct rte_security_ipsec_sa_options {
> * by the PMD.
> */
> uint32_t iv_gen_disable : 1;
> +
> + /** Verify tunnel header in inbound
> + * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR``: Verify destination
> + * IP address.
> + *
> + * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR``: Verify both
> + * source and destination IP addresses.
> + */
> + uint32_t tunnel_hdr_verify : 2;
> };
>
> /** IPSec security association direction */
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-09-07 17:30 0% ` Gujjar, Abhinandan S
@ 2021-09-08 7:42 0% ` Shijith Thotton
2021-09-08 7:53 0% ` Gujjar, Abhinandan S
0 siblings, 1 reply; 200+ results
From: Shijith Thotton @ 2021-09-08 7:42 UTC (permalink / raw)
To: Gujjar, Abhinandan S, dev
Cc: Jerin Jacob Kollanukkaran, Anoob Joseph,
Pavan Nikhilesh Bhagavatula, Akhil Goyal, Ray Kinsella,
Ankur Dwivedi
>>
>> >> In crypto adapter metadata, reserved bytes in request info structure
>> >> is a space holder for response info. It enforces an order of
>> >> operation if the structures are updated using memcpy to avoid
>> >> overwriting response info. It is logical to move the reserved space
>> >> out of request info. It also solves the ordering issue mentioned before.
>> >I would like to understand what kind of ordering issue you have faced
>> >with the current approach. Could you please give an example/sequence
>> and explain?
>> >
>>
>> I have seen this issue with crypto adapter autotest (#n215).
>>
>> Example:
>> rte_memcpy(&m_data.response_info, &response_info,
>> sizeof(response_info)); rte_memcpy(&m_data.request_info,
>> &request_info, sizeof(request_info));
>>
>> Here response info is getting overwritten by request info.
>> Above lines can be reordered to fix the issue, but can be ignored with this patch.
>There is a reason for designing the metadata in this way.
>Right now, sizeof (union rte_event_crypto_metadata) is 16 bytes.
>So, the session based case needs just 16 bytes to store the data.
>Whereas, for sessionless case each crypto_ops requires another 16 bytes.
>
>By changing the struct in the following way you are doubling the memory
>requirement.
>With the changes, for sessionless case, each crypto op requires 32 bytes of space
>instead of 16 bytes and the mempool will be bigger.
>This will have the perf impact too!
>
>You can just copy the individual members(cdev_id & queue_pair_id) after the
>response_info.
>OR You have a better way?
>
I missed the second word of event structure. It adds an extra 8 bytes, which is not required.
Let me know what you think of the below change to the metadata structure.
struct rte_event_crypto_metadata {
uint64_t response_info; // 8 bytes
struct rte_event_crypto_request request_info; // 8 bytes
};
Total structure size is 16 bytes.
Response info can be set like below in test app:
m_data.response_info = response_info.event;
The main aim of this patch is to decouple response info from request info.
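Put together, filling the proposed 16-byte layout might look like this (a
sketch; it works because the first 8 bytes of struct rte_event are
addressable as the .event word, and cdev_id/qp_id here are simply whatever
device and queue pair the application targets):

struct rte_event_crypto_metadata m_data;

m_data.response_info = response_info.event; /* first word of the event */
m_data.request_info.cdev_id = cdev_id;
m_data.request_info.queue_pair_id = qp_id;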
>>
>> >>
>> >> This patch removes the reserve field from request info and makes
>> >> event crypto metadata type to structure from union to make space for
>> >> response info.
>> >>
>> >> App and drivers are updated as per metadata change.
>> >>
>> >> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> >> Acked-by: Anoob Joseph <anoobj@marvell.com>
>> >> ---
>> >> v3:
>> >> * Updated ABI section of release notes.
>> >>
>> >> v2:
>> >> * Updated deprecation notice.
>> >>
>> >> v1:
>> >> * Rebased.
>> >>
>> >> app/test/test_event_crypto_adapter.c | 14 +++++++-------
>> >> doc/guides/rel_notes/deprecation.rst | 6 ------
>> >> doc/guides/rel_notes/release_21_11.rst | 2 ++
>> >> drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
>> >> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
>> >> .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
>> >> lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
>> >> lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
>> >> 8 files changed, 26 insertions(+), 35 deletions(-)
>> >>
>> >> diff --git a/app/test/test_event_crypto_adapter.c
>> >> b/app/test/test_event_crypto_adapter.c
>> >> index 3ad20921e2..0d73694d3a 100644
>> >> --- a/app/test/test_event_crypto_adapter.c
>> >> +++ b/app/test/test_event_crypto_adapter.c
>> >> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less) {
>> >> struct rte_crypto_sym_xform cipher_xform;
>> >> struct rte_cryptodev_sym_session *sess;
>> >> - union rte_event_crypto_metadata m_data;
>> >> + struct rte_event_crypto_metadata m_data;
>> >> struct rte_crypto_sym_op *sym_op;
>> >> struct rte_crypto_op *op;
>> >> struct rte_mbuf *m;
>> >> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less) {
>> >> struct rte_crypto_sym_xform cipher_xform;
>> >> struct rte_cryptodev_sym_session *sess;
>> >> - union rte_event_crypto_metadata m_data;
>> >> + struct rte_event_crypto_metadata m_data;
>> >> struct rte_crypto_sym_op *sym_op;
>> >> struct rte_crypto_op *op;
>> >> struct rte_mbuf *m;
>> >> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
>> >> if (cap &
>> >> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
>> >> /* Fill in private user data information */
>> >> rte_memcpy(&m_data.response_info,
>> &response_info,
>> >> - sizeof(m_data));
>> >> + sizeof(response_info));
>> >> rte_cryptodev_sym_session_set_user_data(sess,
>> >> &m_data, sizeof(m_data));
>> >> }
>> >> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
>> >> op->private_data_offset = len;
>> >> /* Fill in private data information */
>> >> rte_memcpy(&m_data.response_info, &response_info,
>> >> - sizeof(m_data));
>> >> + sizeof(response_info));
>> >> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
>> >> }
>> >>
>> >> @@ -519,7 +519,7 @@ configure_cryptodev(void)
>> >> DEFAULT_NUM_XFORMS *
>> >> sizeof(struct rte_crypto_sym_xform) +
>> >> MAXIMUM_IV_LENGTH +
>> >> - sizeof(union rte_event_crypto_metadata),
>> >> + sizeof(struct rte_event_crypto_metadata),
>> >> rte_socket_id());
>> >> if (params.op_mpool == NULL) {
>> >> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
>> @@ -549,12
>> >> +549,12 @@ configure_cryptodev(void)
>> >> * to include the session headers & private data
>> >> */
>> >> session_size =
>> >> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
>> >> - session_size += sizeof(union rte_event_crypto_metadata);
>> >> + session_size += sizeof(struct rte_event_crypto_metadata);
>> >>
>> >> params.session_mpool = rte_cryptodev_sym_session_pool_create(
>> >> "CRYPTO_ADAPTER_SESSION_MP",
>> >> MAX_NB_SESSIONS, 0, 0,
>> >> - sizeof(union rte_event_crypto_metadata),
>> >> + sizeof(struct rte_event_crypto_metadata),
>> >> SOCKET_ID_ANY);
>> >> TEST_ASSERT_NOT_NULL(params.session_mpool,
>> >> "session mempool allocation failed\n"); diff --git
>> >> a/doc/guides/rel_notes/deprecation.rst
>> >> b/doc/guides/rel_notes/deprecation.rst
>> >> index 76a4abfd6b..58ee95c020 100644
>> >> --- a/doc/guides/rel_notes/deprecation.rst
>> >> +++ b/doc/guides/rel_notes/deprecation.rst
>> >> @@ -266,12 +266,6 @@ Deprecation Notices
>> >> values to the function ``rte_event_eth_rx_adapter_queue_add`` using
>> >> the structure ``rte_event_eth_rx_adapter_queue_add``.
>> >>
>> >> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a
>> >> space holder
>> >> - for ``response_info``. Both should be decoupled for better clarity.
>> >> - New space for ``response_info`` can be made by changing
>> >> - ``rte_event_crypto_metadata`` type to structure from union.
>> >> - This change is targeted for DPDK 21.11.
>> >> -
>> >> * metrics: The function ``rte_metrics_init`` will have a non-void return
>> >> in order to notify errors instead of calling ``rte_exit``.
>> >>
>> >> diff --git a/doc/guides/rel_notes/release_21_11.rst
>> >> b/doc/guides/rel_notes/release_21_11.rst
>> >> index d707a554ef..ab76d5dd55 100644
>> >> --- a/doc/guides/rel_notes/release_21_11.rst
>> >> +++ b/doc/guides/rel_notes/release_21_11.rst
>> >> @@ -100,6 +100,8 @@ ABI Changes
>> >> Also, make sure to start the actual text at the margin.
>> >>
>> =======================================================
>> >>
>> >> +* eventdev: Modified type of ``union rte_event_crypto_metadata`` to
>> >> +struct and
>> >> + removed reserved bytes from ``struct rte_event_crypto_request``.
>> >>
>> >> Known Issues
>> >> ------------
>> >> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> >> b/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> >> index eac6796cfb..c51be63146 100644
>> >> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> >> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> >> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws,
>> >> uintptr_t req,
>> >> ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]); }
>> >>
>> >> -static inline union rte_event_crypto_metadata *
>> >> +static inline struct rte_event_crypto_metadata *
>> >> get_event_crypto_mdata(struct rte_crypto_op *op) {
>> >> - union rte_event_crypto_metadata *ec_mdata;
>> >> + struct rte_event_crypto_metadata *ec_mdata;
>> >>
>> >> if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
>> >> ec_mdata = rte_cryptodev_sym_session_get_user_data(
>> >> op->sym->session);
>> >> else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
>> >> op->private_data_offset)
>> >> - ec_mdata = (union rte_event_crypto_metadata *)
>> >> + ec_mdata = (struct rte_event_crypto_metadata *)
>> >> ((uint8_t *)op + op->private_data_offset);
>> >> else
>> >> return NULL;
>> >> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op
>> *op)
>> >> uint16_t __rte_hot otx_crypto_adapter_enqueue(void *port, struct
>> >> rte_crypto_op *op) {
>> >> - union rte_event_crypto_metadata *ec_mdata;
>> >> + struct rte_event_crypto_metadata *ec_mdata;
>> >> struct cpt_instance *instance;
>> >> struct cpt_request_info *req;
>> >> struct rte_event *rsp_info;
>> >> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> >> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> >> index 42100154cd..952d1352f4 100644
>> >> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> >> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> >> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp
>> *qp,
>> >> struct rte_crypto_op *op,
>> >> uint64_t cpt_inst_w7)
>> >> {
>> >> - union rte_event_crypto_metadata *m_data;
>> >> + struct rte_event_crypto_metadata *m_data;
>> >> union cpt_inst_s inst;
>> >> uint64_t lmt_status;
>> >>
>> >> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp
>> *qp,
>> >> }
>> >> } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
>> >> op->private_data_offset) {
>> >> - m_data = (union rte_event_crypto_metadata *)
>> >> + m_data = (struct rte_event_crypto_metadata *)
>> >> ((uint8_t *)op +
>> >> op->private_data_offset);
>> >> } else {
>> >> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> >> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> >> index ecf7eb9f56..458e8306d7 100644
>> >> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> >> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> >> @@ -16,7 +16,7 @@
>> >> static inline uint16_t
>> >> otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) {
>> >> - union rte_event_crypto_metadata *m_data;
>> >> + struct rte_event_crypto_metadata *m_data;
>> >> struct rte_crypto_op *crypto_op;
>> >> struct rte_cryptodev *cdev;
>> >> struct otx2_cpt_qp *qp;
>> >> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct
>> >> rte_event
>> >> *ev)
>> >> qp_id = m_data->request_info.queue_pair_id;
>> >> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS
>> &&
>> >> crypto_op->private_data_offset) {
>> >> - m_data = (union rte_event_crypto_metadata *)
>> >> + m_data = (struct rte_event_crypto_metadata *)
>> >> ((uint8_t *)crypto_op +
>> >> crypto_op->private_data_offset);
>> >> cdev_id = m_data->request_info.cdev_id; diff --git
>> >> a/lib/eventdev/rte_event_crypto_adapter.c
>> >> b/lib/eventdev/rte_event_crypto_adapter.c
>> >> index e1d38d383d..6977391ae9 100644
>> >> --- a/lib/eventdev/rte_event_crypto_adapter.c
>> >> +++ b/lib/eventdev/rte_event_crypto_adapter.c
>> >> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct
>> >> rte_event_crypto_adapter *adapter,
>> >> struct rte_event *ev, unsigned int cnt) {
>> >> struct rte_event_crypto_adapter_stats *stats = &adapter-
>> >> >crypto_stats;
>> >> - union rte_event_crypto_metadata *m_data = NULL;
>> >> + struct rte_event_crypto_metadata *m_data = NULL;
>> >> struct crypto_queue_pair_info *qp_info = NULL;
>> >> struct rte_crypto_op *crypto_op;
>> >> unsigned int i, n;
>> >> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct
>> >> rte_event_crypto_adapter *adapter,
>> >> len++;
>> >> } else if (crypto_op->sess_type ==
>> RTE_CRYPTO_OP_SESSIONLESS &&
>> >> crypto_op->private_data_offset) {
>> >> - m_data = (union rte_event_crypto_metadata *)
>> >> + m_data = (struct rte_event_crypto_metadata *)
>> >> ((uint8_t *)crypto_op +
>> >> crypto_op->private_data_offset);
>> >> cdev_id = m_data->request_info.cdev_id; @@ -
>> >> 504,7 +504,7 @@ eca_ops_enqueue_burst(struct
>> rte_event_crypto_adapter
>> >> *adapter,
>> >> struct rte_crypto_op **ops, uint16_t num) {
>> >> struct rte_event_crypto_adapter_stats *stats = &adapter-
>> >> >crypto_stats;
>> >> - union rte_event_crypto_metadata *m_data = NULL;
>> >> + struct rte_event_crypto_metadata *m_data = NULL;
>> >> uint8_t event_dev_id = adapter->eventdev_id;
>> >> uint8_t event_port_id = adapter->event_port_id;
>> >> struct rte_event events[BATCH_SIZE]; @@ -523,7 +523,7 @@
>> >> eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
>> >> ops[i]->sym->session);
>> >> } else if (ops[i]->sess_type ==
>> >> RTE_CRYPTO_OP_SESSIONLESS &&
>> >> ops[i]->private_data_offset) {
>> >> - m_data = (union rte_event_crypto_metadata *)
>> >> + m_data = (struct rte_event_crypto_metadata *)
>> >> ((uint8_t *)ops[i] +
>> >> ops[i]->private_data_offset);
>> >> }
>> >> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
>> >> b/lib/eventdev/rte_event_crypto_adapter.h
>> >> index f8c6cca87c..3c24d9d9df 100644
>> >> --- a/lib/eventdev/rte_event_crypto_adapter.h
>> >> +++ b/lib/eventdev/rte_event_crypto_adapter.h
>> >> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
>> >> * provide event request information to the adapter.
>> >> */
>> >> struct rte_event_crypto_request {
>> >> - uint8_t resv[8];
>> >> - /**< Overlaps with first 8 bytes of struct rte_event
>> >> - * that encode the response event information. Application
>> >> - * is expected to fill in struct rte_event response_info.
>> >> - */
>> >> uint16_t cdev_id;
>> >> /**< cryptodev ID to be used */
>> >> uint16_t queue_pair_id;
>> >> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
>> >> * operation. If the transfer is done by SW, event response information
>> >> * will be used by the adapter.
>> >> */
>> >> -union rte_event_crypto_metadata {
>> >> - struct rte_event_crypto_request request_info;
>> >> - /**< Request information to be filled in by application
>> >> - * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
>> >> - */
>> >> +struct rte_event_crypto_metadata {
>> >> struct rte_event response_info;
>> >> /**< Response information to be filled in by application
>> >> * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
>> >> * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
>> >> */
>> >> + struct rte_event_crypto_request request_info;
>> >> + /**< Request information to be filled in by application
>> >> + * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
>> >> + */
>> >> };
>> >>
>> >> /**
>> >> --
>> >> 2.25.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 1/3] security: add option to configure UDP ports verification
2021-09-08 8:25 5% ` [dpdk-dev] [PATCH 1/3] security: " Tejasree Kondoj
@ 2021-09-08 7:42 0% ` Hemant Agrawal
0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2021-09-08 7:42 UTC (permalink / raw)
To: Tejasree Kondoj, Akhil Goyal, Radu Nicolau, Declan Doherty
Cc: Anoob Joseph, Ankur Dwivedi, Jerin Jacob, Konstantin Ananyev,
Ciara Power, Hemant Agrawal, Gagandeep Singh, Fan Zhang,
Archana Muniganti, dev
On 9/8/2021 1:55 PM, Tejasree Kondoj wrote:
> Add option to indicate whether UDP encapsulation ports
> verification needs to be done as part of inbound
> IPsec processing.
>
> Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
> lib/security/rte_security.h | 7 +++++++
> 2 files changed, 12 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index b0606cb542..afeba0105b 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -141,6 +141,11 @@ ABI Changes
> * Added SA option to indicate whether outer header verification need to be
> done as part of inbound IPsec processing.
>
> +* security: add IPsec SA option to configure UDP ports verification
> +
> + * Added SA option to indicate whether UDP ports verification need to be
> + done as part of inbound IPsec processing.
> +
>
> Known Issues
> ------------
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 2a61cad885..18b0f02c44 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -139,6 +139,13 @@ struct rte_security_ipsec_sa_options {
> */
> uint32_t udp_encap : 1;
>
> + /** Verify UDP encapsulation ports in inbound
> + *
> + * * 1: Match UDP source and destination ports
> + * * 0: Do not match UDP ports
> + */
> + uint32_t udp_ports_verify : 1;
> +
> /** Copy DSCP bits
> *
> * * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 1/3] security: add option to configure UDP ports verification
@ 2021-09-08 8:25 5% ` Tejasree Kondoj
2021-09-08 7:42 0% ` Hemant Agrawal
0 siblings, 1 reply; 200+ results
From: Tejasree Kondoj @ 2021-09-08 8:25 UTC (permalink / raw)
To: Akhil Goyal, Radu Nicolau, Declan Doherty
Cc: Tejasree Kondoj, Anoob Joseph, Ankur Dwivedi, Jerin Jacob,
Konstantin Ananyev, Ciara Power, Hemant Agrawal, Gagandeep Singh,
Fan Zhang, Archana Muniganti, dev
Add option to indicate whether UDP encapsulation ports
verification needs to be done as part of inbound
IPsec processing.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 7 +++++++
2 files changed, 12 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b0606cb542..afeba0105b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -141,6 +141,11 @@ ABI Changes
* Added SA option to indicate whether outer header verification need to be
done as part of inbound IPsec processing.
+* security: add IPsec SA option to configure UDP ports verification
+
+ * Added SA option to indicate whether UDP ports verification need to be
+ done as part of inbound IPsec processing.
+
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 2a61cad885..18b0f02c44 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -139,6 +139,13 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t udp_encap : 1;
+ /** Verify UDP encapsulation ports in inbound
+ *
+ * * 1: Match UDP source and destination ports
+ * * 0: Do not match UDP ports
+ */
+ uint32_t udp_ports_verify : 1;
+
/** Copy DSCP bits
*
* * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to
--
2.27.0
^ permalink raw reply [relevance 5%]
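A sketch of enabling this together with UDP encapsulation on an inbound SA
(field names as in the patch above; the remaining xform setup is elided):

struct rte_security_ipsec_xform ipsec_xform = { 0 };

ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
ipsec_xform.options.udp_encap = 1;        /* NAT-T style UDP encapsulation */
ipsec_xform.options.udp_ports_verify = 1; /* also match UDP src/dst ports */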
* [dpdk-dev] [PATCH 1/3] security: add option to configure tunnel header verification
@ 2021-09-08 8:21 5% ` Tejasree Kondoj
2021-09-08 7:46 0% ` Hemant Agrawal
2021-09-08 10:42 3% ` Akhil Goyal
0 siblings, 2 replies; 200+ results
From: Tejasree Kondoj @ 2021-09-08 8:21 UTC (permalink / raw)
To: Akhil Goyal, Radu Nicolau, Declan Doherty
Cc: Tejasree Kondoj, Anoob Joseph, Ankur Dwivedi, Jerin Jacob,
Konstantin Ananyev, Ciara Power, Hemant Agrawal, Gagandeep Singh,
Fan Zhang, Archana Muniganti, dev
Add an option to indicate whether outer header verification
needs to be done as part of inbound IPsec processing.
With inline IPsec processing, SA lookup happens
in the Rx path of rte_ethdev. When rte_flow is configured to
support more than one SA, the SPI is used to look up the SA.
In such cases, additional verification is required to
ensure duplicate SPIs are not getting processed in the inline path.
For lookaside cases, the same option can be used by the application
to offload tunnel verification to the PMD.
These verifications help in averting possible DoS attacks.
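A minimal sketch, assuming the usual rte_security xform boilerplate, of an
inbound tunnel-mode SA asking the PMD to verify both outer addresses:

#include <rte_security.h>

/* hypothetical inbound SA configuration; only tunnel_hdr_verify
 * comes from this patch, the remaining fields are assumptions */
static const struct rte_security_ipsec_xform ipsec_xform = {
	.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
	.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
	.options = {
		/* drop packets whose outer src/dst do not match the SA */
		.tunnel_hdr_verify =
			RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR,
	},
	/* ... tunnel addresses and crypto parameters elided ... */
};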
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 17 +++++++++++++++++
2 files changed, 22 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 0e3ed28378..b0606cb542 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -136,6 +136,11 @@ ABI Changes
soft and hard SA expiry limits. Limits can be either in units of packets or
bytes.
+* security: add IPsec SA option to configure tunnel header verification
+
+ * Added SA option to indicate whether outer header verification needs to be
+ done as part of inbound IPsec processing.
+
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 95c169d6cf..2a61cad885 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -55,6 +55,14 @@ enum rte_security_ipsec_tunnel_type {
/**< Outer header is IPv6 */
};
+/**
+ * IPSEC tunnel header verification mode
+ *
+ * Controls how outer IP header is verified in inbound.
+ */
+#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR 0x1
+#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR 0x2
+
/**
* Security context for crypto/eth devices
*
@@ -195,6 +203,15 @@ struct rte_security_ipsec_sa_options {
* by the PMD.
*/
uint32_t iv_gen_disable : 1;
+
+ /** Verify tunnel header in inbound
+ * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR``: Verify destination
+ * IP address.
+ *
+ * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR``: Verify both
+ * source and destination IP addresses.
+ */
+ uint32_t tunnel_hdr_verify : 2;
};
/** IPSec security association direction */
--
2.27.0
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [EXT] [PATCH] crypto/dpaa_sec: update release notes
2021-09-07 19:15 3% ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-09-08 7:02 0% ` Hemant Agrawal
0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2021-09-08 7:02 UTC (permalink / raw)
To: Akhil Goyal, Hemant Agrawal; +Cc: dev
On 9/8/2021 12:45 AM, Akhil Goyal wrote:
>> Update the release notes in DPAAx for the changes in recent patches.
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> ---
>> doc/guides/rel_notes/release_21_11.rst | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/release_21_11.rst
>> b/doc/guides/rel_notes/release_21_11.rst
>> index 0afd21812f..c747a0fb11 100644
>> --- a/doc/guides/rel_notes/release_21_11.rst
>> +++ b/doc/guides/rel_notes/release_21_11.rst
>> @@ -75,10 +75,14 @@ New Features
>> * **Updated NXP dpaa2_sec crypto PMD.**
>>
>> * Added raw vector datapath API support
>> + * Added PDCP short MAC-I support
>>
>> * **Updated NXP dpaa_sec crypto PMD.**
>>
>> * Added raw vector datapath API support
>> + * Added PDCP short MAC-I support
>> + * Added non-HMAC, DES, AES-XCBC and AES-CMAC algo support
>> +
>>
> Please send a next version for each of the series, including each
> item separately in the release notes.
> For example, the DES-CBC patch will have a release notes update only for it,
> and subsequently for the others also.
> Please also remove the entry from the deprecation notices and subsequently update
> the release notes ABI changes section in the particular patch.
>
> I tried splitting it, but it would also need changes in deprecation notices.
> Please rebase the 3 series in the following order (as for the 3rd series I am waiting for review from Intel)
> 1. Algo support in dpaaX
> 2. short MAC
> 3. raw crypto
ok
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] net: promote make rarp packet API as stable
2021-09-08 10:59 3% [dpdk-dev] [PATCH] net: promote make rarp packet API as stable Xiao Wang
2021-09-08 4:52 0% ` Stephen Hemminger
@ 2021-09-08 5:07 0% ` Xia, Chenbo
1 sibling, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-09-08 5:07 UTC (permalink / raw)
To: Wang, Xiao W, olivier.matz; +Cc: dev
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Xiao Wang
> Sent: Wednesday, September 8, 2021 6:59 PM
> To: olivier.matz@6wind.com
> Cc: dev@dpdk.org; Wang, Xiao W <xiao.w.wang@intel.com>
> Subject: [dpdk-dev] [PATCH] net: promote make rarp packet API as stable
>
> rte_net_make_rarp_packet was introduced in version v18.02; there has been no
> change in this public API since then, and it's still being used by the vhost
> lib and virtio driver, so promote it as a stable ABI.
>
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> ---
> lib/net/rte_arp.h | 4 ----
> lib/net/version.map | 2 +-
> 2 files changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
> index feb0eb3e49..076c8ab314 100644
> --- a/lib/net/rte_arp.h
> +++ b/lib/net/rte_arp.h
> @@ -50,9 +50,6 @@ struct rte_arp_hdr {
> } __rte_packed __rte_aligned(2);
>
> /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
> * Make a RARP packet based on MAC addr.
> *
> * @param mpool
> @@ -63,7 +60,6 @@ struct rte_arp_hdr {
> * @return
> * - RARP packet pointer on success, or NULL on error
> */
> -__rte_experimental
> struct rte_mbuf *
> rte_net_make_rarp_packet(struct rte_mempool *mpool,
> const struct rte_ether_addr *mac);
> diff --git a/lib/net/version.map b/lib/net/version.map
> index 355b7c25b4..7584018d58 100644
> --- a/lib/net/version.map
> +++ b/lib/net/version.map
> @@ -6,6 +6,7 @@ DPDK_22 {
> rte_net_crc_calc;
> rte_net_crc_set_alg;
> rte_net_get_ptype;
> + rte_net_make_rarp_packet;
>
> local: *;
> };
> @@ -13,7 +14,6 @@ DPDK_22 {
> EXPERIMENTAL {
> global:
>
> - rte_net_make_rarp_packet;
> rte_net_skip_ip6_ext;
> rte_ether_unformat_addr;
> };
> --
> 2.15.1
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] net: promote make rarp packet API as stable
2021-09-08 10:59 3% [dpdk-dev] [PATCH] net: promote make rarp packet API as stable Xiao Wang
@ 2021-09-08 4:52 0% ` Stephen Hemminger
2021-09-08 5:07 0% ` Xia, Chenbo
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 4:52 UTC (permalink / raw)
To: Xiao Wang; +Cc: olivier.matz, dev
On Wed, 8 Sep 2021 18:59:15 +0800
Xiao Wang <xiao.w.wang@intel.com> wrote:
> rte_net_make_rarp_packet was introduced in version v18.02; there has been no
> change in this public API since then, and it's still being used by the vhost
> lib and virtio driver, so promote it as a stable ABI.
>
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 7/8] doc: changes for new pcapng and dumpcap
2021-09-08 4:50 1% ` [dpdk-dev] [PATCH v3 5/8] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-08 4:50 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 4:50 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 +++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 86 +++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 228 insertions(+), 87 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in DPDK 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets much
+like the Wireshark ``dumpcap`` tool does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options, it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+ ``librte_pdump`` library. The ``dpdk-pdump`` tool provides more limited
+ functionality and depends on the Pcap PMD. It is retained only for
+ compatibility reasons; users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+Pcapng is a library for creating files in the Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the format desired.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhanced packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces and
+ stopping after a number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancement to the pdump library to support:
+ * Packet filter with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixed packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes files in the Pcapng packet capture format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In dpdk, only the ``testpmd`` is modified to initialize packet capture
+ framework, other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than testpmd, the user
+ needs to explicitly modify that application to call the packet capture
+ framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+ code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture, use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 000:00:03.0
+ 1. 000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+ # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+ * ``-C <byte_limit>`` -- it's a kernel thing
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` are like those of the Wireshark dumpcap
+ program and are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v3 5/8] pdump: support pcapng and filtering
@ 2021-09-08 4:50 1% ` Stephen Hemminger
2021-09-08 4:50 1% ` [dpdk-dev] [PATCH v3 7/8] doc: changes for new pcapng and dumpcap Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-08 4:50 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.
Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
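A minimal sketch of the new client-side flow (the ring, mempool and optional
BPF program "prm" are assumed to be created elsewhere; error handling is
elided):

#include <rte_pdump.h>

static void
capture_port0(struct rte_ring *ring, struct rte_mempool *mp,
	      const struct rte_bpf_prm *prm)
{
	struct rte_pdump_stats stats;

	/* both directions, all queues of port 0, pcapng framing,
	 * 128 byte snaplen; prm may be NULL to capture unfiltered */
	rte_pdump_enable_bpf(0, RTE_PDUMP_ALL_QUEUES,
			     RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
			     128, ring, mp, prm);

	/* ... dequeue mbufs from "ring" and write them out ... */

	rte_pdump_stats(0, &stats); /* accepted/filtered/nombuf/ringfull */
	rte_pdump_disable(0, RTE_PDUMP_ALL_QUEUES, RTE_PDUMP_FLAG_RXTX);
}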
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 437 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 110 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 435 insertions(+), 126 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index 51bf9c2d11f0..4b01949fe715 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..9cb31d6b5008 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with an older
+ * version of DPDK using an incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf_prm *prm;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,80 +61,140 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t rcs[nb_pkts];
+
+ if (cbs->filter &&
+ rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts) == 0) {
+ /* All packets were filtered out */
+ __atomic_fetch_add(&stats->filtered, nb_pkts,
+ __ATOMIC_RELAXED);
+ return;
+ }
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * If the BPF program returns a zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && rcs[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng then want to wrap packets
+ * otherwise a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +203,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +227,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ struct rte_bpf *filter,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ cbs->filter = filter;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +261,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -228,37 +290,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
uint16_t port;
int ret = 0;
+ struct rte_bpf *filter = NULL;
uint32_t flags;
uint16_t operation;
struct rte_ring *ring;
struct rte_mempool *mp;
- flags = p->flags;
- operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
+ if (p->prm) {
+ if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
+ "invalid BPF program type: %u\n",
+ p->prm->prog_arg.type);
return -EINVAL;
}
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
+
+ filter = rte_bpf_load(p->prm);
+ if (filter == NULL) {
+ PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+ rte_strerror(rte_errno));
+ return -rte_errno;
}
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ }
+
+ flags = p->flags;
+ operation = p->op;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +368,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +378,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+ ring, mp, filter,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -347,8 +421,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +476,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +517,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +531,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->prm = prm;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +568,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function, because although the original API
+ * left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +593,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +637,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, prm);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, prm);
}
int
@@ -537,8 +676,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +692,66 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ const struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ PDUMP_LOG(ERR,
+ "pdump not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "can not find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..e2fbd78c6273 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ * BPF program to run to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf_prm *prm);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +228,35 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are the sum over all receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
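For readers trying the new capture API, here is a minimal usage sketch
(not part of the patch); it assumes rte_pdump_init() has already run in
the primary process and that "ring" and "pool" were created by the caller:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>
    #include <rte_pdump.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>

    static int
    start_capture(uint16_t port, struct rte_ring *ring, struct rte_mempool *pool)
    {
    	struct rte_pdump_stats stats;
    	int ret;

    	/* Capture both directions on all queues in pcapng format,
    	 * copying full packets; prm == NULL means no BPF filter. */
    	ret = rte_pdump_enable_bpf(port, UINT16_MAX,
    				   RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
    				   UINT32_MAX, ring, pool, NULL);
    	if (ret < 0)
    		return ret;

    	/* ... a consumer thread dequeues captured mbufs from "ring" ... */

    	if (rte_pdump_stats(port, &stats) == 0)
    		printf("accepted %" PRIu64 ", ring full %" PRIu64 "\n",
    		       stats.accepted, stats.ringfull);
    	return 0;
    }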
* [dpdk-dev] [PATCH] net: promote make rarp packet API as stable
@ 2021-09-08 10:59 3% Xiao Wang
2021-09-08 4:52 0% ` Stephen Hemminger
2021-09-08 5:07 0% ` Xia, Chenbo
0 siblings, 2 replies; 200+ results
From: Xiao Wang @ 2021-09-08 10:59 UTC (permalink / raw)
To: olivier.matz; +Cc: dev, Xiao Wang
rte_net_make_rarp_packet was introduced in v18.02; the API has not
changed since then and is still used by the vhost library and the
virtio driver, so promote it to stable ABI.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
---
lib/net/rte_arp.h | 4 ----
lib/net/version.map | 2 +-
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index feb0eb3e49..076c8ab314 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -50,9 +50,6 @@ struct rte_arp_hdr {
} __rte_packed __rte_aligned(2);
/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* Make a RARP packet based on MAC addr.
*
* @param mpool
@@ -63,7 +60,6 @@ struct rte_arp_hdr {
* @return
* - RARP packet pointer on success, or NULL on error
*/
-__rte_experimental
struct rte_mbuf *
rte_net_make_rarp_packet(struct rte_mempool *mpool,
const struct rte_ether_addr *mac);
diff --git a/lib/net/version.map b/lib/net/version.map
index 355b7c25b4..7584018d58 100644
--- a/lib/net/version.map
+++ b/lib/net/version.map
@@ -6,6 +6,7 @@ DPDK_22 {
rte_net_crc_calc;
rte_net_crc_set_alg;
rte_net_get_ptype;
+ rte_net_make_rarp_packet;
local: *;
};
@@ -13,7 +14,6 @@ DPDK_22 {
EXPERIMENTAL {
global:
- rte_net_make_rarp_packet;
rte_net_skip_ip6_ext;
rte_ether_unformat_addr;
};
--
2.15.1
^ permalink raw reply [relevance 3%]
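As a usage reference for the now-stable API, a minimal sketch (assuming
port 0 is started and "mp" is a caller-created pktmbuf pool):

    #include <rte_arp.h>
    #include <rte_ether.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Build a RARP packet for the given MAC and transmit it on
     * port 0, queue 0; free the mbuf if the transmit does not go out. */
    static int
    send_rarp(struct rte_mempool *mp, const struct rte_ether_addr *mac)
    {
    	struct rte_mbuf *m = rte_net_make_rarp_packet(mp, mac);

    	if (m == NULL)
    		return -1;
    	if (rte_eth_tx_burst(0, 0, &m, 1) != 1) {
    		rte_pktmbuf_free(m);
    		return -1;
    	}
    	return 0;
    }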
* [dpdk-dev] [PATCH v3 1/6] bbdev: add capability for CRC16 check
@ 2021-09-08 1:15 4% ` Nicolas Chautru
2021-09-12 12:35 0% ` Tom Rix
0 siblings, 1 reply; 200+ results
From: Nicolas Chautru @ 2021-09-08 1:15 UTC (permalink / raw)
To: dev, gakhil
Cc: thomas, trix, hemant.agrawal, mingshan.zhang, arun.joshi,
Nicolas Chautru
Add a missing capability flag for the case when CRC16
is being used for the TB CRC check.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
app/test-bbdev/test_bbdev_vector.c | 2 ++
doc/guides/prog_guide/bbdev.rst | 3 +++
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/bbdev/rte_bbdev_op.h | 34 ++++++++++++++++++----------------
4 files changed, 24 insertions(+), 16 deletions(-)
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 614dbd1..8d796b1 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -167,6 +167,8 @@
*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
+ else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
+ *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
else if (!strcmp(token, "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
*op_flag_value = RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
else if (!strcmp(token, "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
index 9619280..8bd7cba 100644
--- a/doc/guides/prog_guide/bbdev.rst
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -891,6 +891,9 @@ given below.
|RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
| Set to drop the last CRC bits decoding output |
+--------------------------------------------------------------------+
+|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
+| Set for code block CRC-16 checking |
++--------------------------------------------------------------------+
|RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
| Set for bit-level de-interleaver bypass on input stream |
+--------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a55..69dd518 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,7 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* bbdev: Added capability related to more comprehensive CRC options.
ABI Changes
-----------
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index f946842..7c44ddd 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
/** Set to drop the last CRC bits decoding output */
RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
+ /** Set for transport block CRC-16 checking */
+ RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
/** Set for bit-level de-interleaver bypass on Rx stream. */
- RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
+ RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
/** Set for HARQ combined input stream enable. */
- RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
+ RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
/** Set for HARQ combined output stream enable. */
- RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
+ RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
/** Set for LDPC decoder bypass.
* RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
*/
- RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
+ RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
/** Set for soft-output stream enable */
- RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
+ RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
/** Set for Rate-Matching bypass on soft-out stream. */
- RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
+ RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
/** Set for bit-level de-interleaver bypass on soft-output stream. */
- RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 9),
+ RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 10),
/** Set for iteration stopping on successful decode condition
* i.e. a successful syndrome check.
*/
- RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
+ RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
/** Set if a device supports decoder dequeue interrupts. */
- RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
+ RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
/** Set if a device supports scatter-gather functionality. */
- RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
+ RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
/** Set if a device supports input/output HARQ compression. */
- RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
+ RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
/** Set if a device supports input LLR compression. */
- RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
+ RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
/** Set if a device supports HARQ input from
* device's internal memory.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 15),
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 16),
/** Set if a device supports HARQ output to
* device's internal memory.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 16),
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 17),
/** Set if a device supports loop-back access to
* HARQ internal memory. Intended for troubleshooting.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 17),
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 18),
/** Set if a device includes LLR filler bits in the circular buffer
* for HARQ memory. If not set, it is assumed the filler bits are not
* in HARQ memory and handled directly by the LDPC decoder.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 18)
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 19)
};
/** Flags for LDPC encoder operation and capability structure */
--
1.8.3.1
^ permalink raw reply [relevance 4%]
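Before setting the new flag in an op, an application would normally probe
for it in the device capabilities. A minimal sketch (assuming the usual
RTE_BBDEV_OP_NONE-terminated capability list):

    #include <stdbool.h>
    #include <rte_bbdev.h>
    #include <rte_bbdev_op.h>

    /* Return true if the LDPC decoder of dev_id advertises CRC16 checking. */
    static bool
    ldpc_dec_supports_crc16(uint16_t dev_id)
    {
    	struct rte_bbdev_info info;
    	const struct rte_bbdev_op_cap *cap;

    	if (rte_bbdev_info_get(dev_id, &info) != 0)
    		return false;
    	for (cap = info.drv.capabilities;
    	     cap->type != RTE_BBDEV_OP_NONE; cap++) {
    		if (cap->type == RTE_BBDEV_OP_LDPC_DEC)
    			return !!(cap->cap.ldpc_dec.capability_flags &
    				  RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK);
    	}
    	return false;
    }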
* Re: [dpdk-dev] [EXT] Re: [RFC 11/15] eventdev: reserve fields in timer object
2021-09-01 6:48 0% ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
@ 2021-09-07 21:02 0% ` Carrillo, Erik G
0 siblings, 0 replies; 200+ results
From: Carrillo, Erik G @ 2021-09-07 21:02 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Stephen Hemminger
Cc: Jerin Jacob Kollanukkaran, Ananyev, Konstantin, dev
> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Wednesday, September 1, 2021 1:48 AM
> To: Stephen Hemminger <stephen@networkplumber.org>; Carrillo, Erik G
> <erik.g.carrillo@intel.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; dev@dpdk.org
> Subject: RE: [EXT] Re: [dpdk-dev] [RFC 11/15] eventdev: reserve fields in
> timer object
>
> >On Tue, 24 Aug 2021 01:10:15 +0530
> ><pbhagavatula@marvell.com> wrote:
> >
> >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >>
> >> Reserve fields in rte_event_timer data structure to address future
> >> use cases.
> >> Also, remove volatile from rte_event_timer.
> >>
> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> >Reserved fields are not a good idea. They don't solve future API/ABI
> >problems.
> >
> >The issue is that you need to zero them and check they are zero;
> >otherwise they can't safely be used later. This happened with the
> >Linux kernel system calls, where in several cases a flag field was added
> >for future use.
> >The problem is that old programs would work with any garbage in the
> >flag field, and therefore the flag could not be extended.
>
> The change is in rte_event_timer, which is a fast-path object similar
> to rte_mbuf.
> I think fast-path objects don't have the above-mentioned caveat.
>
> >
> >A better way is to make structures internal opaque objects that can be
> >resized. Why is rte_event_timer_adapter exposed in the API?
>
> rte_event_timer_adapter has API semantics similar to rte_mempool; the
> objects of the adapter are rte_event_timer objects, which carry the
> timer state.
>
> Erik,
>
> I think we should move the fields in the current rte_event_timer
> structure around; currently we have
>
> struct rte_event_timer {
> struct rte_event ev;
> volatile enum rte_event_timer_state state;
> x-------x 4 byte hole x---------x
> uint64_t timeout_ticks;
> uint64_t impl_opaque[2];
> uint8_t user_meta[0];
> } __rte_cache_aligned;
>
> Move to
>
> struct rte_event_timer {
> struct rte_event ev;
> uint64_t timeout_ticks;
> uint64_t impl_opaque[2];
> uint64_t rsvd;
> enum rte_event_timer_state state;
> uint8_t user_meta[0];
> } __rte_cache_aligned;
>
> Since it's cache-aligned, the size doesn't change.
>
> Thoughts?
>
I'm not seeing any problem with rearranging the members. However, you originally had "uint64_t rsvd[2];", while above it's just one variable. Did you mean to make it an array?
The following also appears to have no holes:
$ pahole -C rte_event_timer eventdev_rte_event_timer_adapter.c.o
struct rte_event_timer {
struct rte_event ev; /* 0 16 */
uint64_t timeout_ticks; /* 16 8 */
uint64_t impl_opaque[2]; /* 24 16 */
uint64_t rsvd[2]; /* 40 16 */
enum rte_event_timer_state state; /* 56 4 */
uint8_t user_meta[]; /* 60 0 */
/* size: 64, cachelines: 1, members: 6 */
/* padding: 4 */
} __attribute__((__aligned__(64)));
Thanks,
Erik
> Thanks,
> Pavan.
^ permalink raw reply [relevance 0%]
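A layout change like the one discussed above can be guarded at compile
time; a sketch (assuming a 64-byte cache line, matching the pahole output
quoted in the thread):

    #include <stddef.h>
    #include <rte_event_timer_adapter.h>

    /* Compile-time checks that the rearranged rte_event_timer still fits
     * one cache line and that "state" sits at the offset pahole reported. */
    _Static_assert(sizeof(struct rte_event_timer) == 64,
    	       "rte_event_timer must stay one cache line");
    _Static_assert(offsetof(struct rte_event_timer, state) == 56,
    	       "state expected at offset 56");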
* [dpdk-dev] [PATCH v4 4/4] cryptodev: expose driver interface as internal
2021-09-07 19:22 3% ` [dpdk-dev] [PATCH v4 2/4] cryptodev: change valid dev API Akhil Goyal
@ 2021-09-07 19:22 2% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-07 19:22 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, Akhil Goyal
The rte_cryptodev_pmd.* files are for drivers only and should be
private to DPDK, not installed for application use.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_21_11.rst | 4 +++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 2 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 2 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 2 +-
.../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_session.h | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 2 +-
drivers/crypto/ccp/ccp_crypto.c | 2 +-
drivers/crypto/ccp/ccp_pmd_ops.c | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 2 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 2 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 2 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 2 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 2 +-
drivers/crypto/mlx5/mlx5_crypto.h | 2 +-
drivers/crypto/mvsam/rte_mrvl_pmd.c | 2 +-
drivers/crypto/mvsam/rte_mrvl_pmd_ops.c | 2 +-
drivers/crypto/nitrox/nitrox_sym.c | 2 +-
drivers/crypto/null/null_crypto_pmd.c | 2 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 2 +-
drivers/crypto/octeontx/otx_cryptodev.c | 2 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 2 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 2 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 2 +-
drivers/crypto/qat/qat_asym.h | 2 +-
drivers/crypto/qat/qat_asym_pmd.c | 2 +-
drivers/crypto/qat/qat_sym.h | 2 +-
drivers/crypto/qat/qat_sym_hw_dp.c | 2 +-
drivers/crypto/qat/qat_sym_pmd.c | 2 +-
drivers/crypto/qat/qat_sym_session.h | 2 +-
.../scheduler/rte_cryptodev_scheduler.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 2 +-
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/crypto/virtio/virtio_rxtx.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 2 +-
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 2 +-
.../net/softnic/rte_eth_softnic_cryptodev.c | 2 +-
.../{rte_cryptodev_pmd.c => cryptodev_pmd.c} | 2 +-
.../{rte_cryptodev_pmd.h => cryptodev_pmd.h} | 16 +++++++++---
lib/cryptodev/meson.build | 18 ++++++++++---
lib/cryptodev/rte_cryptodev.c | 2 +-
lib/cryptodev/version.map | 25 +++++++++++--------
lib/eventdev/rte_event_crypto_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/pipeline/rte_table_action.c | 2 +-
64 files changed, 105 insertions(+), 79 deletions(-)
rename lib/cryptodev/{rte_cryptodev_pmd.c => cryptodev_pmd.c} (99%)
rename lib/cryptodev/{rte_cryptodev_pmd.h => cryptodev_pmd.h} (98%)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..59445a6f42 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -223,9 +223,6 @@ Deprecation Notices
session and the private data of session. An opaque pointer can be exposed
directly to application which can be attached to the ``rte_crypto_op``.
-* cryptodev: The interface between library and drivers will be marked
- as internal in DPDK 21.11.
-
* security: Hide structure ``rte_security_session`` and expose an opaque
pointer for the private data to the application which can be attached
to the packet while enqueuing.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index e7ad50ba09..8785b25ff6 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -106,6 +106,10 @@ API Changes
rte_cryptodev_is_valid_dev as it can be used by the application as
well as PMD to check whether the device is valid or not.
+* cryptodev: The rte_cryptodev_pmd.* files are renamed to cryptodev_pmd.*,
+ as they are for drivers only, should be private to DPDK, and are not
+ installed for application use.
+
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 886e2a5aaa..330aad8157 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 18dbc4c18c..edb7275e76 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "aesni_gcm_pmd_private.h"
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index a01c826a3c..60963a8208 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index fc7fdfec8e..48a8f91868 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -8,7 +8,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_ether.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "aesni_mb_pmd_private.h"
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index c642ac350f..36a1a9bb4f 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 01ccfb4b23..1b2749fe62 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "armv8_pmd_private.h"
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index aa7fad6d70..d1dd22823e 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -7,7 +7,7 @@
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
index 8240c6fc25..d40595b4bd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -8,7 +8,7 @@
#include <stdbool.h>
#include <rte_crypto.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "bcmfs_sym_defs.h"
#include "bcmfs_sym_req.h"
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 3fb3fe0f8a..258750afe7 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -9,7 +9,7 @@
#include <rte_byteorder.h>
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_bus_vdev.h>
diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index f37d35f18f..70daed791e 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -20,7 +20,7 @@
#include <rte_memory.h>
#include <rte_spinlock.h>
#include <rte_string_fns.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "ccp_dev.h"
#include "ccp_crypto.h"
diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c b/drivers/crypto/ccp/ccp_pmd_ops.c
index 98f964f361..0d615d311c 100644
--- a/drivers/crypto/ccp/ccp_pmd_ops.c
+++ b/drivers/crypto/ccp/ccp_pmd_ops.c
@@ -5,7 +5,7 @@
#include <string.h>
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "ccp_pmd_private.h"
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index ab9416942e..a54d81de46 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -7,7 +7,7 @@
#include <rte_bus_vdev.h>
#include <rte_common.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_pci.h>
#include <rte_dev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index db7b5aa7c6..012eb0c051 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_dev.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index cccca77f60..3a1a4a2e29 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index b03d2eee14..d7e9f87396 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -6,7 +6,7 @@
#define _CN10K_CRYPTODEV_OPS_H_
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
extern struct rte_cryptodev_ops cn10k_cpt_ops;
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev.c b/drivers/crypto/cnxk/cn9k_cryptodev.c
index e60b352fac..6b8cb01a12 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_dev.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 40109acc3f..75277936b0 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
#include <rte_vect.h>
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
index 1255de33ae..309f507346 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
@@ -5,7 +5,7 @@
#ifndef _CN9K_CRYPTODEV_OPS_H_
#define _CN9K_CRYPTODEV_OPS_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
extern struct rte_cryptodev_ops cn9k_cpt_ops;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 957c78063f..41d8fe49e1 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_errno.h>
#include "roc_cpt.h"
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1ccead3641..bf69c61916 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -18,7 +18,7 @@
#include <rte_cycles.h>
#include <rte_kvargs.h>
#include <rte_dev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_common.h>
#include <rte_fslmc.h>
#include <fslmc_vfio.h>
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 19d4684e24..3d53746ef1 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -12,7 +12,7 @@
#include <rte_byteorder.h>
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#ifdef RTE_LIB_SECURITY
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 48b7db9e9b..d6f927417a 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 43376c1aa0..f075054807 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "kasumi_pmd_private.h"
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 722acb8d19..d589e0ac3d 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -8,7 +8,7 @@
#include <stdbool.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <mlx5_common_utils.h>
#include <mlx5_common_devx.h>
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd.c b/drivers/crypto/mvsam/rte_mrvl_pmd.c
index ed85bb6547..a72642a772 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_security_driver.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
index fa36461276..3064b1f136 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
@@ -8,7 +8,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_security_driver.h>
#include "mrvl_pmd_private.h"
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index 2768bdd2e1..f8b7edcd69 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -4,7 +4,7 @@
#include <stdbool.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto.h>
#include "nitrox_sym.h"
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 179e5ff36d..f9935d52cc 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -3,7 +3,7 @@
*/
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index d67892a1bb..a8b5a06e7f 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "null_crypto_pmd_private.h"
diff --git a/drivers/crypto/octeontx/otx_cryptodev.c b/drivers/crypto/octeontx/otx_cryptodev.c
index 3822c0d779..c294f86d79 100644
--- a/drivers/crypto/octeontx/otx_cryptodev.c
+++ b/drivers/crypto/octeontx/otx_cryptodev.c
@@ -5,7 +5,7 @@
#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_log.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index eac6796cfb..9b5bde53f8 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -5,7 +5,7 @@
#include <rte_alarm.h>
#include <rte_bus_pci.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_eventdev.h>
#include <rte_event_crypto_adapter.h>
#include <rte_errno.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
index 75fb4f9a3b..85b1f00263 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_mempool.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 42100154cd..09ddbb5f34 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -4,7 +4,7 @@
#include <unistd.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_event_crypto_adapter.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
index 1970187f88..8d135909b3 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
@@ -5,7 +5,7 @@
#ifndef _OTX2_CRYPTODEV_OPS_H_
#define _OTX2_CRYPTODEV_OPS_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#define OTX2_CPT_MIN_HEADROOM_REQ 24
#define OTX2_CPT_MIN_TAILROOM_REQ 8
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 37b969b916..13c6ea8724 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index ed75877581..52715f86f8 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "openssl_pmd_private.h"
#include "compat.h"
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 2838aee76f..308b6b2e0b 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -5,7 +5,7 @@
#ifndef _QAT_ASYM_H_
#define _QAT_ASYM_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto_asym.h>
#include "icp_qat_fw_pke.h"
#include "qat_common.h"
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0c25cce09e..e91bb0d317 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -2,7 +2,7 @@
* Copyright(c) 2019 Intel Corporation
*/
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "qat_logs.h"
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 20b1b53d36..e3ec7f0de4 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -5,7 +5,7 @@
#ifndef _QAT_SYM_H_
#define _QAT_SYM_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#ifdef RTE_LIB_SECURITY
#include <rte_net_crc.h>
#endif
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
index ac9ac05363..36d11e0dc9 100644
--- a/drivers/crypto/qat/qat_sym_hw_dp.c
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "adf_transport_access_macros.h"
#include "icp_qat_fw.h"
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 6868e5f001..efda921c05 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -7,7 +7,7 @@
#include <rte_dev.h>
#include <rte_malloc.h>
#include <rte_pci.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
#endif
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 33b236e49b..6ebc176729 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -5,7 +5,7 @@
#define _QAT_SYM_SESSION_H_
#include <rte_crypto.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#ifdef RTE_LIB_SECURITY
#include <rte_security.h>
#endif
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 1e0b4df0ca..1e0c4fe464 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -4,7 +4,7 @@
#include <rte_string_fns.h>
#include <rte_reorder.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "rte_cryptodev_scheduler.h"
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
index 632197833c..560c26af50 100644
--- a/drivers/crypto/scheduler/scheduler_pmd.c
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -4,7 +4,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index cb125e8027..465b88ade8 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -7,7 +7,7 @@
#include <rte_malloc.h>
#include <rte_dev.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_reorder.h>
#include "scheduler_pmd_private.h"
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 9aab357846..8284ac0b66 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 906a0fe60b..3f46014b7d 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "snow3g_pmd_private.h"
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4bae74a487..8faa39df4a 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -9,7 +9,7 @@
#include <rte_pci.h>
#include <rte_bus_pci.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_eal.h>
#include "virtio_cryptodev.h"
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index e1cb4ad104..a65524a306 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2018 HUAWEI TECHNOLOGIES CO., LTD.
*/
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "virtqueue.h"
#include "virtio_cryptodev.h"
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 42b669f188..d4b343a7af 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index dc9dc25ef8..38642d45ab 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "zuc_pmd_private.h"
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
index a543225376..b33cb7e139 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
@@ -6,7 +6,7 @@
#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_eventdev.h>
#include "cpt_pmd_logs.h"
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
index ecf7eb9f56..1fc56f903b 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
@@ -6,7 +6,7 @@
#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_event_crypto_adapter.h>
#include <rte_eventdev.h>
diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
index 8e278801c5..9a7d006f1a 100644
--- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
+++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
@@ -6,7 +6,7 @@
#include <stdio.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_string_fns.h>
#include "rte_eth_softnic_internals.h"
diff --git a/lib/cryptodev/rte_cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
similarity index 99%
rename from lib/cryptodev/rte_cryptodev_pmd.c
rename to lib/cryptodev/cryptodev_pmd.c
index e342daabc4..71e34140cd 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -5,7 +5,7 @@
#include <rte_string_fns.h>
#include <rte_malloc.h>
-#include "rte_cryptodev_pmd.h"
+#include "cryptodev_pmd.h"
/**
* Parse name from argument
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
similarity index 98%
rename from lib/cryptodev/rte_cryptodev_pmd.h
rename to lib/cryptodev/cryptodev_pmd.h
index dd2a4940a2..ec7bb82be8 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -2,8 +2,8 @@
* Copyright(c) 2015-2020 Intel Corporation.
*/
-#ifndef _RTE_CRYPTODEV_PMD_H_
-#define _RTE_CRYPTODEV_PMD_H_
+#ifndef _CRYPTODEV_PMD_H_
+#define _CRYPTODEV_PMD_H_
/** @file
* RTE Crypto PMD APIs
@@ -80,6 +80,7 @@ struct cryptodev_driver {
* @return
* - The rte_cryptodev structure pointer for the given device ID.
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_get_dev(uint8_t dev_id);
@@ -91,6 +92,7 @@ rte_cryptodev_pmd_get_dev(uint8_t dev_id);
* @return
* - The rte_cryptodev structure pointer for the given device ID.
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_get_named_dev(const char *name);
@@ -401,6 +403,7 @@ struct rte_cryptodev_ops {
* @return
* - Slot in the rte_dev_devices array for a new device;
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_allocate(const char *name, int socket_id);
@@ -414,6 +417,7 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id);
* @return
* - 0 on success, negative on error
*/
+__rte_internal
extern int
rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
@@ -435,6 +439,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
* - 0 on success
* - errno on failure
*/
+__rte_internal
int
rte_cryptodev_pmd_parse_input_args(
struct rte_cryptodev_pmd_init_params *params,
@@ -454,6 +459,7 @@ rte_cryptodev_pmd_parse_input_args(
* - crypto device instance on success
* - NULL on creation failure
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_create(const char *name,
struct rte_device *device,
@@ -471,6 +477,7 @@ rte_cryptodev_pmd_create(const char *name,
* - 0 on success
* - errno on failure
*/
+__rte_internal
int
rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev);
@@ -484,6 +491,7 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev);
* @return
* void
*/
+__rte_internal
void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
enum rte_cryptodev_event_type event);
@@ -491,6 +499,7 @@ void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
* @internal
* Create unique device name
*/
+__rte_internal
int
rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix);
@@ -506,6 +515,7 @@ rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix);
* @return
* The driver type identifier
*/
+__rte_internal
uint8_t rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
const struct rte_driver *drv);
@@ -555,4 +565,4 @@ set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
}
#endif
-#endif /* _RTE_CRYPTODEV_PMD_H_ */
+#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/meson.build b/lib/cryptodev/meson.build
index bec80beab3..735935df4a 100644
--- a/lib/cryptodev/meson.build
+++ b/lib/cryptodev/meson.build
@@ -1,12 +1,22 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017-2019 Intel Corporation
-sources = files('rte_cryptodev.c', 'rte_cryptodev_pmd.c', 'cryptodev_trace_points.c')
-headers = files('rte_cryptodev.h',
- 'rte_cryptodev_pmd.h',
+sources = files(
+ 'cryptodev_pmd.c',
+ 'cryptodev_trace_points.c',
+ 'rte_cryptodev.c',
+)
+headers = files(
+ 'rte_cryptodev.h',
'rte_cryptodev_trace.h',
'rte_cryptodev_trace_fp.h',
'rte_crypto.h',
'rte_crypto_sym.h',
- 'rte_crypto_asym.h')
+ 'rte_crypto_asym.h',
+)
+
+driver_sdk_headers += files(
+ 'cryptodev_pmd.h',
+)
+
deps += ['kvargs', 'mbuf', 'rcu']
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 37502b9b3c..9fa3aff1d3 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -39,7 +39,7 @@
#include "rte_crypto.h"
#include "rte_cryptodev.h"
-#include "rte_cryptodev_pmd.h"
+#include "cryptodev_pmd.h"
#include "rte_cryptodev_trace.h"
static uint8_t nb_drivers;
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 1a7f759c57..8294c9f64f 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -8,7 +8,6 @@ DPDK_22 {
rte_crypto_cipher_algorithm_strings;
rte_crypto_cipher_operation_strings;
rte_crypto_op_pool_create;
- rte_cryptodev_allocate_driver;
rte_cryptodev_callback_register;
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
@@ -27,15 +26,6 @@ DPDK_22 {
rte_cryptodev_info_get;
rte_cryptodev_is_valid_dev;
rte_cryptodev_name_get;
- rte_cryptodev_pmd_allocate;
- rte_cryptodev_pmd_callback_process;
- rte_cryptodev_pmd_create;
- rte_cryptodev_pmd_create_dev_name;
- rte_cryptodev_pmd_destroy;
- rte_cryptodev_pmd_get_dev;
- rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_parse_input_args;
- rte_cryptodev_pmd_release_device;
rte_cryptodev_queue_pair_count;
rte_cryptodev_queue_pair_setup;
rte_cryptodev_socket_id;
@@ -117,3 +107,18 @@ EXPERIMENTAL {
rte_cryptodev_remove_enq_callback;
};
+
+INTERNAL {
+ global:
+
+ rte_cryptodev_allocate_driver;
+ rte_cryptodev_pmd_allocate;
+ rte_cryptodev_pmd_callback_process;
+ rte_cryptodev_pmd_create;
+ rte_cryptodev_pmd_create_dev_name;
+ rte_cryptodev_pmd_destroy;
+ rte_cryptodev_pmd_get_dev;
+ rte_cryptodev_pmd_get_named_dev;
+ rte_cryptodev_pmd_parse_input_args;
+ rte_cryptodev_pmd_release_device;
+};
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index 2d38389858..ebfc8326a8 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -9,7 +9,7 @@
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include <rte_service_component.h>
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index cb0ed7b620..e347d6dfd5 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -31,7 +31,7 @@
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_telemetry.h>
#include "rte_eventdev.h"
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 54721ed96a..ad7904c0ee 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -16,7 +16,7 @@
#include <rte_udp.h>
#include <rte_vxlan.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "rte_table_action.h"
--
2.25.1
^ permalink raw reply [relevance 2%]
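The resulting split is: applications keep including only the installed
<rte_cryptodev.h>, while a PMD includes the driver-only header and may
call the now-internal helpers. A sketch of the driver-side pattern
(my_pmd_probe is a hypothetical name, not from the patch):

    #include <errno.h>
    #include <rte_cryptodev.h>   /* public API, installed for applications */
    #include <cryptodev_pmd.h>   /* driver-only header, not installed */

    static int
    my_pmd_probe(const char *name, int socket_id)
    {
    	struct rte_cryptodev *dev;

    	/* Internal helper: only PMDs built against DPDK may call it. */
    	dev = rte_cryptodev_pmd_allocate(name, socket_id);
    	if (dev == NULL)
    		return -ENODEV;
    	/* ... fill in dev->dev_ops and the enqueue/dequeue burst hooks ... */
    	return 0;
    }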
* [dpdk-dev] [PATCH v4 2/4] cryptodev: change valid dev API
@ 2021-09-07 19:22 3% ` Akhil Goyal
2021-09-08 15:16 0% ` Akhil Goyal
2021-09-07 19:22 2% ` [dpdk-dev] [PATCH v4 4/4] cryptodev: expose driver interface as internal Akhil Goyal
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-09-07 19:22 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, Akhil Goyal
The API rte_cryptodev_pmd_is_valid_dev can be used
by the application as well as by the PMD to check whether
the device is valid or not; hence, the _pmd prefix is
removed from the API name.
The applications and drivers which use this API are
also updated.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/rel_notes/release_21_11.rst | 4 ++
.../net/softnic/rte_eth_softnic_cryptodev.c | 2 +-
examples/fips_validation/main.c | 2 +-
examples/ip_pipeline/cryptodev.c | 3 +-
lib/cryptodev/rte_cryptodev.c | 50 +++++++++----------
lib/cryptodev/rte_cryptodev.h | 11 ++++
lib/cryptodev/rte_cryptodev_pmd.h | 11 ----
lib/cryptodev/version.map | 2 +-
lib/eventdev/rte_event_crypto_adapter.c | 4 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/pipeline/rte_table_action.c | 2 +-
11 files changed, 48 insertions(+), 45 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 411fa9530a..e7ad50ba09 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -102,6 +102,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cryptodev: The API rte_cryptodev_pmd_is_valid_dev is renamed to
+ rte_cryptodev_is_valid_dev, as it can be used by the application as
+ well as by the PMD to check whether the device is valid or not.
+
ABI Changes
-----------
diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
index a1a4ca5650..8e278801c5 100644
--- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
+++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
@@ -82,7 +82,7 @@ softnic_cryptodev_create(struct pmd_internals *p,
dev_id = (uint32_t)status;
} else {
- if (rte_cryptodev_pmd_is_valid_dev(params->dev_id) == 0)
+ if (rte_cryptodev_is_valid_dev(params->dev_id) == 0)
return NULL;
dev_id = params->dev_id;
diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
index c175fe6ac2..e892078f0e 100644
--- a/examples/fips_validation/main.c
+++ b/examples/fips_validation/main.c
@@ -196,7 +196,7 @@ parse_cryptodev_id_arg(char *arg)
}
- if (!rte_cryptodev_pmd_is_valid_dev(cryptodev_id)) {
+ if (!rte_cryptodev_is_valid_dev(cryptodev_id)) {
RTE_LOG(ERR, USER1, "Error %i: invalid cryptodev id %s\n",
cryptodev_id, arg);
return -1;
diff --git a/examples/ip_pipeline/cryptodev.c b/examples/ip_pipeline/cryptodev.c
index b0d9f3d217..9997d97456 100644
--- a/examples/ip_pipeline/cryptodev.c
+++ b/examples/ip_pipeline/cryptodev.c
@@ -6,7 +6,6 @@
#include <stdio.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
#include <rte_string_fns.h>
#include "cryptodev.h"
@@ -74,7 +73,7 @@ cryptodev_create(const char *name, struct cryptodev_params *params)
dev_id = (uint32_t)status;
} else {
- if (rte_cryptodev_pmd_is_valid_dev(params->dev_id) == 0)
+ if (rte_cryptodev_is_valid_dev(params->dev_id) == 0)
return NULL;
dev_id = params->dev_id;
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 447aa9d519..37502b9b3c 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -663,7 +663,7 @@ rte_cryptodev_is_valid_device_data(uint8_t dev_id)
}
unsigned int
-rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+rte_cryptodev_is_valid_dev(uint8_t dev_id)
{
struct rte_cryptodev *dev = NULL;
@@ -761,7 +761,7 @@ rte_cryptodev_socket_id(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return -1;
dev = rte_cryptodev_pmd_get_dev(dev_id);
@@ -1032,7 +1032,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
struct rte_cryptodev *dev;
int diag;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1080,7 +1080,7 @@ rte_cryptodev_start(uint8_t dev_id)
CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1110,7 +1110,7 @@ rte_cryptodev_stop(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return;
}
@@ -1136,7 +1136,7 @@ rte_cryptodev_close(uint8_t dev_id)
struct rte_cryptodev *dev;
int retval;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -1;
}
@@ -1176,7 +1176,7 @@ rte_cryptodev_get_qp_status(uint8_t dev_id, uint16_t queue_pair_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1207,7 +1207,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1283,7 +1283,7 @@ rte_cryptodev_add_enq_callback(uint8_t dev_id,
return NULL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
rte_errno = ENODEV;
return NULL;
@@ -1349,7 +1349,7 @@ rte_cryptodev_remove_enq_callback(uint8_t dev_id,
return -EINVAL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -ENODEV;
}
@@ -1418,7 +1418,7 @@ rte_cryptodev_add_deq_callback(uint8_t dev_id,
return NULL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
rte_errno = ENODEV;
return NULL;
@@ -1484,7 +1484,7 @@ rte_cryptodev_remove_deq_callback(uint8_t dev_id,
return -EINVAL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -ENODEV;
}
@@ -1542,7 +1542,7 @@ rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -ENODEV;
}
@@ -1565,7 +1565,7 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return;
}
@@ -1581,7 +1581,7 @@ rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return;
}
@@ -1608,7 +1608,7 @@ rte_cryptodev_callback_register(uint8_t dev_id,
if (!cb_fn)
return -EINVAL;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1652,7 +1652,7 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
if (!cb_fn)
return -EINVAL;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1720,7 +1720,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
uint8_t index;
int ret;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1765,7 +1765,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
uint8_t index;
int ret;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1939,7 +1939,7 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev *dev;
uint8_t driver_id;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1969,7 +1969,7 @@ rte_cryptodev_asym_session_clear(uint8_t dev_id,
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -2079,7 +2079,7 @@ rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
struct rte_cryptodev *dev;
unsigned int priv_sess_size;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return 0;
dev = rte_cryptodev_pmd_get_dev(dev_id);
@@ -2099,7 +2099,7 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return 0;
dev = rte_cryptodev_pmd_get_dev(dev_id);
@@ -2156,7 +2156,7 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
sym_crypto_fill_status(vec, EINVAL);
return 0;
}
@@ -2179,7 +2179,7 @@ rte_cryptodev_get_raw_dp_ctx_size(uint8_t dev_id)
int32_t size = sizeof(struct rte_crypto_raw_dp_ctx);
int32_t priv_size;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return -EINVAL;
dev = rte_cryptodev_pmd_get_dev(dev_id);
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..bb01f0f195 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1368,6 +1368,17 @@ __rte_experimental
unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id);
+/**
+ * Validate if the crypto device index is valid attached crypto device.
+ *
+ * @param dev_id Crypto device index.
+ *
+ * @return
+ * - If the device index is valid (1) or not (0).
+ */
+unsigned int
+rte_cryptodev_is_valid_dev(uint8_t dev_id);
+
/**
* Provide driver identifier.
*
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..dd2a4940a2 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -94,17 +94,6 @@ rte_cryptodev_pmd_get_dev(uint8_t dev_id);
struct rte_cryptodev *
rte_cryptodev_pmd_get_named_dev(const char *name);
-/**
- * Validate if the crypto device index is valid attached crypto device.
- *
- * @param dev_id Crypto device index.
- *
- * @return
- * - If the device index is valid (1) or not (0).
- */
-unsigned int
-rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id);
-
/**
* The pool of rte_cryptodev structures.
*/
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 979d823a7c..1a7f759c57 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -25,6 +25,7 @@ DPDK_22 {
rte_cryptodev_get_feature_name;
rte_cryptodev_get_sec_ctx;
rte_cryptodev_info_get;
+ rte_cryptodev_is_valid_dev;
rte_cryptodev_name_get;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
@@ -33,7 +34,6 @@ DPDK_22 {
rte_cryptodev_pmd_destroy;
rte_cryptodev_pmd_get_dev;
rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_is_valid_dev;
rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_release_device;
rte_cryptodev_queue_pair_count;
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index e1d38d383d..2d38389858 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -781,7 +781,7 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t id,
EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
- if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) {
+ if (!rte_cryptodev_is_valid_dev(cdev_id)) {
RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id);
return -EINVAL;
}
@@ -898,7 +898,7 @@ rte_event_crypto_adapter_queue_pair_del(uint8_t id, uint8_t cdev_id,
EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
- if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) {
+ if (!rte_cryptodev_is_valid_dev(cdev_id)) {
RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id);
return -EINVAL;
}
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 594dd5e759..cb0ed7b620 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -165,7 +165,7 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
struct rte_cryptodev *cdev;
RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
- if (!rte_cryptodev_pmd_is_valid_dev(cdev_id))
+ if (!rte_cryptodev_is_valid_dev(cdev_id))
return -EINVAL;
dev = &rte_eventdevs[dev_id];
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 98f3438774..54721ed96a 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -1732,7 +1732,7 @@ struct sym_crypto_data {
static int
sym_crypto_cfg_check(struct rte_table_action_sym_crypto_config *cfg)
{
- if (!rte_cryptodev_pmd_is_valid_dev(cfg->cryptodev_id))
+ if (!rte_cryptodev_is_valid_dev(cfg->cryptodev_id))
return -EINVAL;
if (cfg->mp_create == NULL || cfg->mp_init == NULL)
return -EINVAL;
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] [PATCH] crypto/dpaa_sec: update release notes
@ 2021-09-07 19:15 3% ` Akhil Goyal
2021-09-08 7:02 0% ` Hemant Agrawal
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-09-07 19:15 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: dev
> Update the release notes in DPAAx for the changes in recent patches.
>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 0afd21812f..c747a0fb11 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -75,10 +75,14 @@ New Features
> * **Updated NXP dpaa2_sec crypto PMD.**
>
> * Added raw vector datapath API support
> + * Added PDCP short MAC-I support
>
> * **Updated NXP dpaa_sec crypto PMD.**
>
> * Added raw vector datapath API support
> + * Added PDCP short MAC-I support
> + * Added non-HMAC, DES, AES-XCBC and AES-CMAC algo support
> +
>
Please send a next version for each of the series, adding each item
separately in the release notes. For example, the DES-CBC patch should
update the release notes only for DES-CBC, and likewise for the others.
Please also remove the corresponding entry from the deprecation notices
and update the ABI changes section of the release notes in the particular
patch that makes the change.
I tried splitting it, but it would also need changes in the deprecation notices.
Please rebase the 3 series in the following order (for the 3rd series I am
waiting for review from Intel):
1. Algo support in dpaaX
2. short MAC
3. raw crypto
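For instance, the per-patch split could look like this in the release
notes (the entry text here is illustrative, not the final wording):

* **Updated NXP dpaa_sec crypto PMD.**

  * Added DES-CBC algo support.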
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH v5 1/5] sched: add PIE based congestion management
@ 2021-09-07 19:14 3% ` Stephen Hemminger
2021-09-08 8:49 3% ` Liguzinski, WojciechX
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-09-07 19:14 UTC (permalink / raw)
To: Liguzinski, WojciechX
Cc: dev, jasvinder.singh, cristian.dumitrescu, megha.ajmera
On Tue, 7 Sep 2021 07:33:24 +0000
"Liguzinski, WojciechX" <wojciechx.liguzinski@intel.com> wrote:
> +/**
> + * @brief make a decision to drop or enqueue a packet based on probability
> + * criteria
> + *
> + * @param pie_cfg [in] config pointer to a PIE configuration parameter structure
> + * @param pie [in, out] data pointer to PIE runtime data
> + * @param time [in] current time (measured in cpu cycles)
> + */
> +static inline void
> +__rte_experimental
> +_calc_drop_probability(const struct rte_pie_config *pie_cfg,
> + struct rte_pie *pie, uint64_t time)
This code adds a lot of inline functions in the name of performance.
But every inline like this means the internal ABI of the implementation
has to be exposed.
You would probably get a bigger performance bump from not using floating
point in the internal math than from the minor optimization of having
so many inlines.
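For illustration, a minimal sketch of the fixed-point alternative being
suggested -- this is not the actual PIE implementation; the Q32 format,
names and clamping below are assumptions:

/* Illustrative only: PIE-style probability update in Q32 fixed point
 * (value * 2^32) instead of double; __int128 assumes GCC/Clang.
 */
#include <stdint.h>

#define Q32_ONE (1ULL << 32)

static inline int64_t
q32_mul(int64_t coeff_q32, int64_t x)
{
	return (int64_t)(((__int128)coeff_q32 * x) >> 32);
}

/* p += alpha * (qdelay - qdelay_ref) + beta * (qdelay - qdelay_old),
 * with p, alpha and beta in Q32 and the queue delays in CPU cycles.
 */
static inline uint64_t
pie_update_prob_q32(uint64_t p, int64_t alpha, int64_t beta,
		    int64_t qdelay, int64_t qdelay_ref, int64_t qdelay_old)
{
	int64_t np = (int64_t)p + q32_mul(alpha, qdelay - qdelay_ref)
		   + q32_mul(beta, qdelay - qdelay_old);

	if (np < 0)
		np = 0;
	if (np > (int64_t)Q32_ONE)
		np = (int64_t)Q32_ONE;
	return (uint64_t)np;
}

The drop decision then compares p against a 32-bit random value, keeping
floating point entirely off the fast path.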
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3 4/4] cryptodev: expose driver interface as internal
2021-09-07 19:00 3% ` [dpdk-dev] [PATCH v3 2/4] cryptodev: change valid dev API Akhil Goyal
@ 2021-09-07 19:00 2% ` Akhil Goyal
2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-07 19:00 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, Akhil Goyal
The rte_cryptodev_pmd.* files are for drivers only and should be
private to DPDK, and not installed for app use.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 4 +++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 2 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 2 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 2 +-
.../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_session.h | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 2 +-
drivers/crypto/ccp/ccp_crypto.c | 2 +-
drivers/crypto/ccp/ccp_pmd_ops.c | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 2 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 2 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 2 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 2 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 2 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 2 +-
drivers/crypto/mlx5/mlx5_crypto.h | 2 +-
drivers/crypto/mvsam/rte_mrvl_pmd.c | 2 +-
drivers/crypto/mvsam/rte_mrvl_pmd_ops.c | 2 +-
drivers/crypto/nitrox/nitrox_sym.c | 2 +-
drivers/crypto/null/null_crypto_pmd.c | 2 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 2 +-
drivers/crypto/octeontx/otx_cryptodev.c | 2 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 2 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 2 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 2 +-
drivers/crypto/qat/qat_asym.h | 2 +-
drivers/crypto/qat/qat_asym_pmd.c | 2 +-
drivers/crypto/qat/qat_sym.h | 2 +-
drivers/crypto/qat/qat_sym_hw_dp.c | 2 +-
drivers/crypto/qat/qat_sym_pmd.c | 2 +-
drivers/crypto/qat/qat_sym_session.h | 2 +-
.../scheduler/rte_cryptodev_scheduler.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 2 +-
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/crypto/virtio/virtio_rxtx.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 2 +-
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 2 +-
.../net/softnic/rte_eth_softnic_cryptodev.c | 2 +-
.../{rte_cryptodev_pmd.c => cryptodev_pmd.c} | 2 +-
.../{rte_cryptodev_pmd.h => cryptodev_pmd.h} | 16 +++++++++---
lib/cryptodev/meson.build | 18 ++++++++++---
lib/cryptodev/rte_cryptodev.c | 2 +-
lib/cryptodev/version.map | 25 +++++++++++--------
lib/eventdev/rte_event_crypto_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/pipeline/rte_table_action.c | 2 +-
63 files changed, 105 insertions(+), 76 deletions(-)
rename lib/cryptodev/{rte_cryptodev_pmd.c => cryptodev_pmd.c} (99%)
rename lib/cryptodev/{rte_cryptodev_pmd.h => cryptodev_pmd.h} (98%)
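For context, a minimal sketch of what the __rte_internal tags added below
mean, assuming the usual DPDK compat macros where internal symbols live in
the INTERNAL version node:

/* In the driver-only header (cryptodev_pmd.h), the symbol is tagged: */
__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_get_dev(uint8_t dev_id);

/*
 * Unless the caller is built with ALLOW_INTERNAL_API (as in-tree DPDK
 * drivers are), __rte_internal expands to an error attribute, so an
 * external application calling this function fails at compile time
 * rather than silently depending on a non-public ABI.
 */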
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index e7ad50ba09..8785b25ff6 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -106,6 +106,10 @@ API Changes
rte_cryptodev_is_valid_dev as it can be used by the application as
well as PMD to check whether the device is valid or not.
+* cryptodev: The rte_cryptodev_pmd.* files are renamed as cryptodev_pmd.*
+ as it is for drivers only and should be private to DPDK, and not
+ installed for app use.
+
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 886e2a5aaa..330aad8157 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 18dbc4c18c..edb7275e76 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "aesni_gcm_pmd_private.h"
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index a01c826a3c..60963a8208 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index fc7fdfec8e..48a8f91868 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -8,7 +8,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_ether.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "aesni_mb_pmd_private.h"
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index c642ac350f..36a1a9bb4f 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 01ccfb4b23..1b2749fe62 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "armv8_pmd_private.h"
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index aa7fad6d70..d1dd22823e 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -7,7 +7,7 @@
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
index 8240c6fc25..d40595b4bd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -8,7 +8,7 @@
#include <stdbool.h>
#include <rte_crypto.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "bcmfs_sym_defs.h"
#include "bcmfs_sym_req.h"
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 3fb3fe0f8a..258750afe7 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -9,7 +9,7 @@
#include <rte_byteorder.h>
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_bus_vdev.h>
diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index f37d35f18f..70daed791e 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -20,7 +20,7 @@
#include <rte_memory.h>
#include <rte_spinlock.h>
#include <rte_string_fns.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "ccp_dev.h"
#include "ccp_crypto.h"
diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c b/drivers/crypto/ccp/ccp_pmd_ops.c
index 98f964f361..0d615d311c 100644
--- a/drivers/crypto/ccp/ccp_pmd_ops.c
+++ b/drivers/crypto/ccp/ccp_pmd_ops.c
@@ -5,7 +5,7 @@
#include <string.h>
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "ccp_pmd_private.h"
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index ab9416942e..a54d81de46 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -7,7 +7,7 @@
#include <rte_bus_vdev.h>
#include <rte_common.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_pci.h>
#include <rte_dev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index db7b5aa7c6..012eb0c051 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_dev.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index cccca77f60..3a1a4a2e29 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index b03d2eee14..d7e9f87396 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -6,7 +6,7 @@
#define _CN10K_CRYPTODEV_OPS_H_
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
extern struct rte_cryptodev_ops cn10k_cpt_ops;
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev.c b/drivers/crypto/cnxk/cn9k_cryptodev.c
index e60b352fac..6b8cb01a12 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_dev.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 40109acc3f..75277936b0 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
#include <rte_vect.h>
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
index 1255de33ae..309f507346 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
@@ -5,7 +5,7 @@
#ifndef _CN9K_CRYPTODEV_OPS_H_
#define _CN9K_CRYPTODEV_OPS_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
extern struct rte_cryptodev_ops cn9k_cpt_ops;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 957c78063f..41d8fe49e1 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -3,7 +3,7 @@
*/
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_errno.h>
#include "roc_cpt.h"
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1ccead3641..bf69c61916 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -18,7 +18,7 @@
#include <rte_cycles.h>
#include <rte_kvargs.h>
#include <rte_dev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_common.h>
#include <rte_fslmc.h>
#include <fslmc_vfio.h>
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 19d4684e24..3d53746ef1 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -12,7 +12,7 @@
#include <rte_byteorder.h>
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
#ifdef RTE_LIB_SECURITY
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 48b7db9e9b..d6f927417a 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 43376c1aa0..f075054807 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "kasumi_pmd_private.h"
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 722acb8d19..d589e0ac3d 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -8,7 +8,7 @@
#include <stdbool.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <mlx5_common_utils.h>
#include <mlx5_common_devx.h>
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd.c b/drivers/crypto/mvsam/rte_mrvl_pmd.c
index ed85bb6547..a72642a772 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd.c
@@ -7,7 +7,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_security_driver.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
index fa36461276..3064b1f136 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
@@ -8,7 +8,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_security_driver.h>
#include "mrvl_pmd_private.h"
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index 2768bdd2e1..f8b7edcd69 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -4,7 +4,7 @@
#include <stdbool.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto.h>
#include "nitrox_sym.h"
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 179e5ff36d..f9935d52cc 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -3,7 +3,7 @@
*/
#include <rte_common.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index d67892a1bb..a8b5a06e7f 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "null_crypto_pmd_private.h"
diff --git a/drivers/crypto/octeontx/otx_cryptodev.c b/drivers/crypto/octeontx/otx_cryptodev.c
index 3822c0d779..c294f86d79 100644
--- a/drivers/crypto/octeontx/otx_cryptodev.c
+++ b/drivers/crypto/octeontx/otx_cryptodev.c
@@ -5,7 +5,7 @@
#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_log.h>
#include <rte_pci.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index eac6796cfb..9b5bde53f8 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -5,7 +5,7 @@
#include <rte_alarm.h>
#include <rte_bus_pci.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_eventdev.h>
#include <rte_event_crypto_adapter.h>
#include <rte_errno.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
index 75fb4f9a3b..85b1f00263 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_mempool.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 42100154cd..09ddbb5f34 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -4,7 +4,7 @@
#include <unistd.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_event_crypto_adapter.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
index 1970187f88..8d135909b3 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
@@ -5,7 +5,7 @@
#ifndef _OTX2_CRYPTODEV_OPS_H_
#define _OTX2_CRYPTODEV_OPS_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#define OTX2_CPT_MIN_HEADROOM_REQ 24
#define OTX2_CPT_MIN_TAILROOM_REQ 8
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 37b969b916..13c6ea8724 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index ed75877581..52715f86f8 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "openssl_pmd_private.h"
#include "compat.h"
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 2838aee76f..308b6b2e0b 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -5,7 +5,7 @@
#ifndef _QAT_ASYM_H_
#define _QAT_ASYM_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_crypto_asym.h>
#include "icp_qat_fw_pke.h"
#include "qat_common.h"
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0c25cce09e..e91bb0d317 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -2,7 +2,7 @@
* Copyright(c) 2019 Intel Corporation
*/
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "qat_logs.h"
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 20b1b53d36..e3ec7f0de4 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -5,7 +5,7 @@
#ifndef _QAT_SYM_H_
#define _QAT_SYM_H_
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#ifdef RTE_LIB_SECURITY
#include <rte_net_crc.h>
#endif
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
index ac9ac05363..36d11e0dc9 100644
--- a/drivers/crypto/qat/qat_sym_hw_dp.c
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -2,7 +2,7 @@
* Copyright(c) 2020 Intel Corporation
*/
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "adf_transport_access_macros.h"
#include "icp_qat_fw.h"
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 6868e5f001..efda921c05 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -7,7 +7,7 @@
#include <rte_dev.h>
#include <rte_malloc.h>
#include <rte_pci.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
#endif
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 33b236e49b..6ebc176729 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -5,7 +5,7 @@
#define _QAT_SYM_SESSION_H_
#include <rte_crypto.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#ifdef RTE_LIB_SECURITY
#include <rte_security.h>
#endif
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 1e0b4df0ca..1e0c4fe464 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -4,7 +4,7 @@
#include <rte_string_fns.h>
#include <rte_reorder.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_malloc.h>
#include "rte_cryptodev_scheduler.h"
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
index 632197833c..560c26af50 100644
--- a/drivers/crypto/scheduler/scheduler_pmd.c
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -4,7 +4,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index cb125e8027..465b88ade8 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -7,7 +7,7 @@
#include <rte_malloc.h>
#include <rte_dev.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_reorder.h>
#include "scheduler_pmd_private.h"
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 9aab357846..8284ac0b66 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 906a0fe60b..3f46014b7d 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "snow3g_pmd_private.h"
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4bae74a487..8faa39df4a 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -9,7 +9,7 @@
#include <rte_pci.h>
#include <rte_bus_pci.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_eal.h>
#include "virtio_cryptodev.h"
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index e1cb4ad104..a65524a306 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2018 HUAWEI TECHNOLOGIES CO., LTD.
*/
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "virtqueue.h"
#include "virtio_cryptodev.h"
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 42b669f188..d4b343a7af 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -5,7 +5,7 @@
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_bus_vdev.h>
#include <rte_malloc.h>
#include <rte_cpuflags.h>
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index dc9dc25ef8..38642d45ab 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -6,7 +6,7 @@
#include <rte_common.h>
#include <rte_malloc.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "zuc_pmd_private.h"
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
index a543225376..b33cb7e139 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
@@ -6,7 +6,7 @@
#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_eventdev.h>
#include "cpt_pmd_logs.h"
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
index ecf7eb9f56..1fc56f903b 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
@@ -6,7 +6,7 @@
#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_event_crypto_adapter.h>
#include <rte_eventdev.h>
diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
index 8e278801c5..9a7d006f1a 100644
--- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
+++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
@@ -6,7 +6,7 @@
#include <stdio.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_string_fns.h>
#include "rte_eth_softnic_internals.h"
diff --git a/lib/cryptodev/rte_cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
similarity index 99%
rename from lib/cryptodev/rte_cryptodev_pmd.c
rename to lib/cryptodev/cryptodev_pmd.c
index e342daabc4..71e34140cd 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -5,7 +5,7 @@
#include <rte_string_fns.h>
#include <rte_malloc.h>
-#include "rte_cryptodev_pmd.h"
+#include "cryptodev_pmd.h"
/**
* Parse name from argument
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
similarity index 98%
rename from lib/cryptodev/rte_cryptodev_pmd.h
rename to lib/cryptodev/cryptodev_pmd.h
index dd2a4940a2..ec7bb82be8 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -2,8 +2,8 @@
* Copyright(c) 2015-2020 Intel Corporation.
*/
-#ifndef _RTE_CRYPTODEV_PMD_H_
-#define _RTE_CRYPTODEV_PMD_H_
+#ifndef _CRYPTODEV_PMD_H_
+#define _CRYPTODEV_PMD_H_
/** @file
* RTE Crypto PMD APIs
@@ -80,6 +80,7 @@ struct cryptodev_driver {
* @return
* - The rte_cryptodev structure pointer for the given device ID.
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_get_dev(uint8_t dev_id);
@@ -91,6 +92,7 @@ rte_cryptodev_pmd_get_dev(uint8_t dev_id);
* @return
* - The rte_cryptodev structure pointer for the given device ID.
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_get_named_dev(const char *name);
@@ -401,6 +403,7 @@ struct rte_cryptodev_ops {
* @return
* - Slot in the rte_dev_devices array for a new device;
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_allocate(const char *name, int socket_id);
@@ -414,6 +417,7 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id);
* @return
* - 0 on success, negative on error
*/
+__rte_internal
extern int
rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
@@ -435,6 +439,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
* - 0 on success
* - errno on failure
*/
+__rte_internal
int
rte_cryptodev_pmd_parse_input_args(
struct rte_cryptodev_pmd_init_params *params,
@@ -454,6 +459,7 @@ rte_cryptodev_pmd_parse_input_args(
* - crypto device instance on success
* - NULL on creation failure
*/
+__rte_internal
struct rte_cryptodev *
rte_cryptodev_pmd_create(const char *name,
struct rte_device *device,
@@ -471,6 +477,7 @@ rte_cryptodev_pmd_create(const char *name,
* - 0 on success
* - errno on failure
*/
+__rte_internal
int
rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev);
@@ -484,6 +491,7 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev);
* @return
* void
*/
+__rte_internal
void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
enum rte_cryptodev_event_type event);
@@ -491,6 +499,7 @@ void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
* @internal
* Create unique device name
*/
+__rte_internal
int
rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix);
@@ -506,6 +515,7 @@ rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix);
* @return
* The driver type identifier
*/
+__rte_internal
uint8_t rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
const struct rte_driver *drv);
@@ -555,4 +565,4 @@ set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
}
#endif
-#endif /* _RTE_CRYPTODEV_PMD_H_ */
+#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/meson.build b/lib/cryptodev/meson.build
index bec80beab3..735935df4a 100644
--- a/lib/cryptodev/meson.build
+++ b/lib/cryptodev/meson.build
@@ -1,12 +1,22 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017-2019 Intel Corporation
-sources = files('rte_cryptodev.c', 'rte_cryptodev_pmd.c', 'cryptodev_trace_points.c')
-headers = files('rte_cryptodev.h',
- 'rte_cryptodev_pmd.h',
+sources = files(
+ 'cryptodev_pmd.c',
+ 'cryptodev_trace_points.c',
+ 'rte_cryptodev.c',
+)
+headers = files(
+ 'rte_cryptodev.h',
'rte_cryptodev_trace.h',
'rte_cryptodev_trace_fp.h',
'rte_crypto.h',
'rte_crypto_sym.h',
- 'rte_crypto_asym.h')
+ 'rte_crypto_asym.h',
+)
+
+driver_sdk_headers += files(
+ 'cryptodev_pmd.h',
+)
+
deps += ['kvargs', 'mbuf', 'rcu']
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 37502b9b3c..9fa3aff1d3 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -39,7 +39,7 @@
#include "rte_crypto.h"
#include "rte_cryptodev.h"
-#include "rte_cryptodev_pmd.h"
+#include "cryptodev_pmd.h"
#include "rte_cryptodev_trace.h"
static uint8_t nb_drivers;
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 1a7f759c57..8294c9f64f 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -8,7 +8,6 @@ DPDK_22 {
rte_crypto_cipher_algorithm_strings;
rte_crypto_cipher_operation_strings;
rte_crypto_op_pool_create;
- rte_cryptodev_allocate_driver;
rte_cryptodev_callback_register;
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
@@ -27,15 +26,6 @@ DPDK_22 {
rte_cryptodev_info_get;
rte_cryptodev_is_valid_dev;
rte_cryptodev_name_get;
- rte_cryptodev_pmd_allocate;
- rte_cryptodev_pmd_callback_process;
- rte_cryptodev_pmd_create;
- rte_cryptodev_pmd_create_dev_name;
- rte_cryptodev_pmd_destroy;
- rte_cryptodev_pmd_get_dev;
- rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_parse_input_args;
- rte_cryptodev_pmd_release_device;
rte_cryptodev_queue_pair_count;
rte_cryptodev_queue_pair_setup;
rte_cryptodev_socket_id;
@@ -117,3 +107,18 @@ EXPERIMENTAL {
rte_cryptodev_remove_enq_callback;
};
+
+INTERNAL {
+ global:
+
+ rte_cryptodev_allocate_driver;
+ rte_cryptodev_pmd_allocate;
+ rte_cryptodev_pmd_callback_process;
+ rte_cryptodev_pmd_create;
+ rte_cryptodev_pmd_create_dev_name;
+ rte_cryptodev_pmd_destroy;
+ rte_cryptodev_pmd_get_dev;
+ rte_cryptodev_pmd_get_named_dev;
+ rte_cryptodev_pmd_parse_input_args;
+ rte_cryptodev_pmd_release_device;
+};
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index 2d38389858..ebfc8326a8 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -9,7 +9,7 @@
#include <rte_dev.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_log.h>
#include <rte_malloc.h>
#include <rte_service_component.h>
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index cb0ed7b620..e347d6dfd5 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -31,7 +31,7 @@
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include <rte_telemetry.h>
#include "rte_eventdev.h"
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 54721ed96a..ad7904c0ee 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -16,7 +16,7 @@
#include <rte_udp.h>
#include <rte_vxlan.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
+#include <cryptodev_pmd.h>
#include "rte_table_action.h"
--
2.25.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v3 2/4] cryptodev: change valid dev API
@ 2021-09-07 19:00 3% ` Akhil Goyal
2021-09-07 19:00 2% ` [dpdk-dev] [PATCH v3 4/4] cryptodev: expose driver interface as internal Akhil Goyal
2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-07 19:00 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, Akhil Goyal
The API rte_cryptodev_pmd_is_valid_dev can be used
by the application as well as the PMD to check whether
the device is valid or not. Hence, _pmd is removed
from the API name.
The applications and drivers which use this API are
also updated.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/rel_notes/release_21_11.rst | 4 ++
.../net/softnic/rte_eth_softnic_cryptodev.c | 2 +-
examples/fips_validation/main.c | 2 +-
examples/ip_pipeline/cryptodev.c | 3 +-
lib/cryptodev/rte_cryptodev.c | 50 +++++++++----------
lib/cryptodev/rte_cryptodev.h | 11 ++++
lib/cryptodev/rte_cryptodev_pmd.h | 11 ----
lib/cryptodev/version.map | 2 +-
lib/eventdev/rte_event_crypto_adapter.c | 4 +-
lib/eventdev/rte_eventdev.c | 2 +-
lib/pipeline/rte_table_action.c | 2 +-
11 files changed, 48 insertions(+), 45 deletions(-)
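As a usage sketch, after this change an application can validate a device
ID with only the public header; the error handling below is illustrative:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_cryptodev.h>

static int
app_use_cryptodev(uint8_t dev_id)
{
	/* Returns 1 if dev_id refers to a valid, attached device. */
	if (!rte_cryptodev_is_valid_dev(dev_id)) {
		fprintf(stderr, "cryptodev %d is not attached\n", dev_id);
		return -ENODEV;
	}
	return 0;
}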
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 411fa9530a..e7ad50ba09 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -102,6 +102,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cryptodev: The API rte_cryptodev_pmd_is_valid_dev is modified to
+ rte_cryptodev_is_valid_dev as it can be used by the application as
+ well as PMD to check whether the device is valid or not.
+
ABI Changes
-----------
diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
index a1a4ca5650..8e278801c5 100644
--- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
+++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
@@ -82,7 +82,7 @@ softnic_cryptodev_create(struct pmd_internals *p,
dev_id = (uint32_t)status;
} else {
- if (rte_cryptodev_pmd_is_valid_dev(params->dev_id) == 0)
+ if (rte_cryptodev_is_valid_dev(params->dev_id) == 0)
return NULL;
dev_id = params->dev_id;
diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
index c175fe6ac2..e892078f0e 100644
--- a/examples/fips_validation/main.c
+++ b/examples/fips_validation/main.c
@@ -196,7 +196,7 @@ parse_cryptodev_id_arg(char *arg)
}
- if (!rte_cryptodev_pmd_is_valid_dev(cryptodev_id)) {
+ if (!rte_cryptodev_is_valid_dev(cryptodev_id)) {
RTE_LOG(ERR, USER1, "Error %i: invalid cryptodev id %s\n",
cryptodev_id, arg);
return -1;
diff --git a/examples/ip_pipeline/cryptodev.c b/examples/ip_pipeline/cryptodev.c
index b0d9f3d217..9997d97456 100644
--- a/examples/ip_pipeline/cryptodev.c
+++ b/examples/ip_pipeline/cryptodev.c
@@ -6,7 +6,6 @@
#include <stdio.h>
#include <rte_cryptodev.h>
-#include <rte_cryptodev_pmd.h>
#include <rte_string_fns.h>
#include "cryptodev.h"
@@ -74,7 +73,7 @@ cryptodev_create(const char *name, struct cryptodev_params *params)
dev_id = (uint32_t)status;
} else {
- if (rte_cryptodev_pmd_is_valid_dev(params->dev_id) == 0)
+ if (rte_cryptodev_is_valid_dev(params->dev_id) == 0)
return NULL;
dev_id = params->dev_id;
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 447aa9d519..37502b9b3c 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -663,7 +663,7 @@ rte_cryptodev_is_valid_device_data(uint8_t dev_id)
}
unsigned int
-rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+rte_cryptodev_is_valid_dev(uint8_t dev_id)
{
struct rte_cryptodev *dev = NULL;
@@ -761,7 +761,7 @@ rte_cryptodev_socket_id(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return -1;
dev = rte_cryptodev_pmd_get_dev(dev_id);
@@ -1032,7 +1032,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
struct rte_cryptodev *dev;
int diag;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1080,7 +1080,7 @@ rte_cryptodev_start(uint8_t dev_id)
CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1110,7 +1110,7 @@ rte_cryptodev_stop(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return;
}
@@ -1136,7 +1136,7 @@ rte_cryptodev_close(uint8_t dev_id)
struct rte_cryptodev *dev;
int retval;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -1;
}
@@ -1176,7 +1176,7 @@ rte_cryptodev_get_qp_status(uint8_t dev_id, uint16_t queue_pair_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1207,7 +1207,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1283,7 +1283,7 @@ rte_cryptodev_add_enq_callback(uint8_t dev_id,
return NULL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
rte_errno = ENODEV;
return NULL;
@@ -1349,7 +1349,7 @@ rte_cryptodev_remove_enq_callback(uint8_t dev_id,
return -EINVAL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -ENODEV;
}
@@ -1418,7 +1418,7 @@ rte_cryptodev_add_deq_callback(uint8_t dev_id,
return NULL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
rte_errno = ENODEV;
return NULL;
@@ -1484,7 +1484,7 @@ rte_cryptodev_remove_deq_callback(uint8_t dev_id,
return -EINVAL;
}
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -ENODEV;
}
@@ -1542,7 +1542,7 @@ rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -ENODEV;
}
@@ -1565,7 +1565,7 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return;
}
@@ -1581,7 +1581,7 @@ rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return;
}
@@ -1608,7 +1608,7 @@ rte_cryptodev_callback_register(uint8_t dev_id,
if (!cb_fn)
return -EINVAL;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1652,7 +1652,7 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
if (!cb_fn)
return -EINVAL;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1720,7 +1720,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
uint8_t index;
int ret;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1765,7 +1765,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
uint8_t index;
int ret;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1939,7 +1939,7 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev *dev;
uint8_t driver_id;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -1969,7 +1969,7 @@ rte_cryptodev_asym_session_clear(uint8_t dev_id,
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
}
@@ -2079,7 +2079,7 @@ rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
struct rte_cryptodev *dev;
unsigned int priv_sess_size;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return 0;
dev = rte_cryptodev_pmd_get_dev(dev_id);
@@ -2099,7 +2099,7 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return 0;
dev = rte_cryptodev_pmd_get_dev(dev_id);
@@ -2156,7 +2156,7 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
sym_crypto_fill_status(vec, EINVAL);
return 0;
}
@@ -2179,7 +2179,7 @@ rte_cryptodev_get_raw_dp_ctx_size(uint8_t dev_id)
int32_t size = sizeof(struct rte_crypto_raw_dp_ctx);
int32_t priv_size;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ if (!rte_cryptodev_is_valid_dev(dev_id))
return -EINVAL;
dev = rte_cryptodev_pmd_get_dev(dev_id);
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..bb01f0f195 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1368,6 +1368,17 @@ __rte_experimental
unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id);
+/**
+ * Validate if the crypto device index is valid attached crypto device.
+ *
+ * @param dev_id Crypto device index.
+ *
+ * @return
+ * - If the device index is valid (1) or not (0).
+ */
+unsigned int
+rte_cryptodev_is_valid_dev(uint8_t dev_id);
+
/**
* Provide driver identifier.
*
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..dd2a4940a2 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -94,17 +94,6 @@ rte_cryptodev_pmd_get_dev(uint8_t dev_id);
struct rte_cryptodev *
rte_cryptodev_pmd_get_named_dev(const char *name);
-/**
- * Validate if the crypto device index is valid attached crypto device.
- *
- * @param dev_id Crypto device index.
- *
- * @return
- * - If the device index is valid (1) or not (0).
- */
-unsigned int
-rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id);
-
/**
* The pool of rte_cryptodev structures.
*/
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 979d823a7c..1a7f759c57 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -25,6 +25,7 @@ DPDK_22 {
rte_cryptodev_get_feature_name;
rte_cryptodev_get_sec_ctx;
rte_cryptodev_info_get;
+ rte_cryptodev_is_valid_dev;
rte_cryptodev_name_get;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
@@ -33,7 +34,6 @@ DPDK_22 {
rte_cryptodev_pmd_destroy;
rte_cryptodev_pmd_get_dev;
rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_is_valid_dev;
rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_release_device;
rte_cryptodev_queue_pair_count;
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index e1d38d383d..2d38389858 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -781,7 +781,7 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t id,
EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
- if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) {
+ if (!rte_cryptodev_is_valid_dev(cdev_id)) {
RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id);
return -EINVAL;
}
@@ -898,7 +898,7 @@ rte_event_crypto_adapter_queue_pair_del(uint8_t id, uint8_t cdev_id,
EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
- if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) {
+ if (!rte_cryptodev_is_valid_dev(cdev_id)) {
RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id);
return -EINVAL;
}
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 594dd5e759..cb0ed7b620 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -165,7 +165,7 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
struct rte_cryptodev *cdev;
RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
- if (!rte_cryptodev_pmd_is_valid_dev(cdev_id))
+ if (!rte_cryptodev_is_valid_dev(cdev_id))
return -EINVAL;
dev = &rte_eventdevs[dev_id];
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 98f3438774..54721ed96a 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -1732,7 +1732,7 @@ struct sym_crypto_data {
static int
sym_crypto_cfg_check(struct rte_table_action_sym_crypto_config *cfg)
{
- if (!rte_cryptodev_pmd_is_valid_dev(cfg->cryptodev_id))
+ if (!rte_cryptodev_is_valid_dev(cfg->cryptodev_id))
return -EINVAL;
if (cfg->mp_create == NULL || cfg->mp_init == NULL)
return -EINVAL;
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-09-07 10:37 0% ` Shijith Thotton
@ 2021-09-07 17:30 0% ` Gujjar, Abhinandan S
2021-09-08 7:42 0% ` Shijith Thotton
0 siblings, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2021-09-07 17:30 UTC (permalink / raw)
To: Shijith Thotton, dev
Cc: Jerin Jacob Kollanukkaran, Anoob Joseph,
Pavan Nikhilesh Bhagavatula, Akhil Goyal, Ray Kinsella,
Ankur Dwivedi
Hi Shijith,
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Tuesday, September 7, 2021 4:07 PM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Anoob Joseph
> <anoobj@marvell.com>; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>; Akhil Goyal <gakhil@marvell.com>; Ray
> Kinsella <mdr@ashroe.eu>; Ankur Dwivedi <adwivedi@marvell.com>
> Subject: RE: [PATCH v3] eventdev: update crypto adapter metadata
> structures
>
> Hi Abhinandan,
>
> >> In crypto adapter metadata, the reserved bytes in the request info
> >> structure are a placeholder for response info. This enforces an order of
> >> operations if the structures are updated using memcpy, to avoid
> >> overwriting response info. It is logical to move the reserved space
> >> out of request info. It also solves the ordering issue mentioned before.
> >I would like to understand what kind of ordering issue you have faced
> >with the current approach. Could you please give an example/sequence
> and explain?
> >
>
> I have seen this issue with crypto adapter autotest (#n215).
>
> Example:
> rte_memcpy(&m_data.response_info, &response_info, sizeof(response_info));
> rte_memcpy(&m_data.request_info, &request_info, sizeof(request_info));
>
> Here response info is getting overwritten by request info.
> The above lines can be reordered to fix the issue, but this can be ignored with this patch.
There is a reason for designing the metadata in this way.
Right now, sizeof(union rte_event_crypto_metadata) is 16 bytes.
So, the session based case needs just 16 bytes to store the data.
Whereas, for the sessionless case, each crypto op requires another 16 bytes.
By changing the struct in the following way you are doubling the memory requirement.
With the changes, for the sessionless case, each crypto op requires 32 bytes of space instead of 16 bytes, and the mempool will be bigger.
This will have a perf impact too!
You can just copy the individual members (cdev_id & queue_pair_id) after the response_info.
Or do you have a better way?
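To make the overlap concrete, here is a self-contained sketch of the old union layout
as described in this thread (field sizes and names are stand-ins, not copied from
DPDK headers):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ev {                  /* stand-in for 16-byte struct rte_event */
        uint64_t event;
        uint64_t u64;
    };

    struct req {                 /* stand-in for rte_event_crypto_request */
        uint8_t resv[8];         /* overlaps first 8 bytes of the response */
        uint16_t cdev_id;
        uint16_t queue_pair_id;
    };

    union meta {                 /* old union layout: 16 bytes total */
        struct req request_info;
        struct ev response_info;
    };

    int main(void)
    {
        union meta m;
        struct ev resp = { .event = 0x1111111111111111ULL, .u64 = 0 };
        struct req req = { .cdev_id = 1, .queue_pair_id = 2 }; /* resv[] zeroed */

        memcpy(&m.response_info, &resp, sizeof(resp));
        memcpy(&m.request_info, &req, sizeof(req)); /* the zeroed resv[]
                                                     * clobbers the first 8
                                                     * bytes of the response */
        printf("event word: 0x%" PRIx64 "\n", m.response_info.event); /* 0x0 */
        return 0;
    }

Copying request_info first (or copying only cdev_id and queue_pair_id, as suggested
above) avoids the clobbering.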
>
> >>
> >> This patch removes the reserve field from request info and makes
> >> event crypto metadata type to structure from union to make space for
> >> response info.
> >>
> >> App and drivers are updated as per metadata change.
> >>
> >> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> >> Acked-by: Anoob Joseph <anoobj@marvell.com>
> >> ---
> >> v3:
> >> * Updated ABI section of release notes.
> >>
> >> v2:
> >> * Updated deprecation notice.
> >>
> >> v1:
> >> * Rebased.
> >>
> >> app/test/test_event_crypto_adapter.c | 14 +++++++-------
> >> doc/guides/rel_notes/deprecation.rst | 6 ------
> >> doc/guides/rel_notes/release_21_11.rst | 2 ++
> >> drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
> >> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
> >> .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
> >> lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
> >> lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
> >> 8 files changed, 26 insertions(+), 35 deletions(-)
> >>
> >> diff --git a/app/test/test_event_crypto_adapter.c
> >> b/app/test/test_event_crypto_adapter.c
> >> index 3ad20921e2..0d73694d3a 100644
> >> --- a/app/test/test_event_crypto_adapter.c
> >> +++ b/app/test/test_event_crypto_adapter.c
> >> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less) {
> >> struct rte_crypto_sym_xform cipher_xform;
> >> struct rte_cryptodev_sym_session *sess;
> >> - union rte_event_crypto_metadata m_data;
> >> + struct rte_event_crypto_metadata m_data;
> >> struct rte_crypto_sym_op *sym_op;
> >> struct rte_crypto_op *op;
> >> struct rte_mbuf *m;
> >> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less) {
> >> struct rte_crypto_sym_xform cipher_xform;
> >> struct rte_cryptodev_sym_session *sess;
> >> - union rte_event_crypto_metadata m_data;
> >> + struct rte_event_crypto_metadata m_data;
> >> struct rte_crypto_sym_op *sym_op;
> >> struct rte_crypto_op *op;
> >> struct rte_mbuf *m;
> >> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
> >> if (cap &
> >> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
> >> /* Fill in private user data information */
> >> rte_memcpy(&m_data.response_info,
> &response_info,
> >> - sizeof(m_data));
> >> + sizeof(response_info));
> >> rte_cryptodev_sym_session_set_user_data(sess,
> >> &m_data, sizeof(m_data));
> >> }
> >> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
> >> op->private_data_offset = len;
> >> /* Fill in private data information */
> >> rte_memcpy(&m_data.response_info, &response_info,
> >> - sizeof(m_data));
> >> + sizeof(response_info));
> >> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
> >> }
> >>
> >> @@ -519,7 +519,7 @@ configure_cryptodev(void)
> >> DEFAULT_NUM_XFORMS *
> >> sizeof(struct rte_crypto_sym_xform) +
> >> MAXIMUM_IV_LENGTH +
> >> - sizeof(union rte_event_crypto_metadata),
> >> + sizeof(struct rte_event_crypto_metadata),
> >> rte_socket_id());
> >> if (params.op_mpool == NULL) {
> >> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
> @@ -549,12
> >> +549,12 @@ configure_cryptodev(void)
> >> * to include the session headers & private data
> >> */
> >> session_size =
> >> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> >> - session_size += sizeof(union rte_event_crypto_metadata);
> >> + session_size += sizeof(struct rte_event_crypto_metadata);
> >>
> >> params.session_mpool = rte_cryptodev_sym_session_pool_create(
> >> "CRYPTO_ADAPTER_SESSION_MP",
> >> MAX_NB_SESSIONS, 0, 0,
> >> - sizeof(union rte_event_crypto_metadata),
> >> + sizeof(struct rte_event_crypto_metadata),
> >> SOCKET_ID_ANY);
> >> TEST_ASSERT_NOT_NULL(params.session_mpool,
> >> "session mempool allocation failed\n"); diff --git
> >> a/doc/guides/rel_notes/deprecation.rst
> >> b/doc/guides/rel_notes/deprecation.rst
> >> index 76a4abfd6b..58ee95c020 100644
> >> --- a/doc/guides/rel_notes/deprecation.rst
> >> +++ b/doc/guides/rel_notes/deprecation.rst
> >> @@ -266,12 +266,6 @@ Deprecation Notices
> >> values to the function ``rte_event_eth_rx_adapter_queue_add`` using
> >> the structure ``rte_event_eth_rx_adapter_queue_add``.
> >>
> >> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a
> >> space holder
> >> - for ``response_info``. Both should be decoupled for better clarity.
> >> - New space for ``response_info`` can be made by changing
> >> - ``rte_event_crypto_metadata`` type to structure from union.
> >> - This change is targeted for DPDK 21.11.
> >> -
> >> * metrics: The function ``rte_metrics_init`` will have a non-void return
> >> in order to notify errors instead of calling ``rte_exit``.
> >>
> >> diff --git a/doc/guides/rel_notes/release_21_11.rst
> >> b/doc/guides/rel_notes/release_21_11.rst
> >> index d707a554ef..ab76d5dd55 100644
> >> --- a/doc/guides/rel_notes/release_21_11.rst
> >> +++ b/doc/guides/rel_notes/release_21_11.rst
> >> @@ -100,6 +100,8 @@ ABI Changes
> >> Also, make sure to start the actual text at the margin.
> >>
> =======================================================
> >>
> >> +* eventdev: Modified type of ``union rte_event_crypto_metadata`` to
> >> +struct and
> >> + removed reserved bytes from ``struct rte_event_crypto_request``.
> >>
> >> Known Issues
> >> ------------
> >> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> index eac6796cfb..c51be63146 100644
> >> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> >> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws,
> >> uintptr_t req,
> >> ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]); }
> >>
> >> -static inline union rte_event_crypto_metadata *
> >> +static inline struct rte_event_crypto_metadata *
> >> get_event_crypto_mdata(struct rte_crypto_op *op) {
> >> - union rte_event_crypto_metadata *ec_mdata;
> >> + struct rte_event_crypto_metadata *ec_mdata;
> >>
> >> if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> >> ec_mdata = rte_cryptodev_sym_session_get_user_data(
> >> op->sym->session);
> >> else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> >> op->private_data_offset)
> >> - ec_mdata = (union rte_event_crypto_metadata *)
> >> + ec_mdata = (struct rte_event_crypto_metadata *)
> >> ((uint8_t *)op + op->private_data_offset);
> >> else
> >> return NULL;
> >> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op
> *op)
> >> uint16_t __rte_hot otx_crypto_adapter_enqueue(void *port, struct
> >> rte_crypto_op *op) {
> >> - union rte_event_crypto_metadata *ec_mdata;
> >> + struct rte_event_crypto_metadata *ec_mdata;
> >> struct cpt_instance *instance;
> >> struct cpt_request_info *req;
> >> struct rte_event *rsp_info;
> >> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> index 42100154cd..952d1352f4 100644
> >> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> >> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp
> *qp,
> >> struct rte_crypto_op *op,
> >> uint64_t cpt_inst_w7)
> >> {
> >> - union rte_event_crypto_metadata *m_data;
> >> + struct rte_event_crypto_metadata *m_data;
> >> union cpt_inst_s inst;
> >> uint64_t lmt_status;
> >>
> >> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp
> *qp,
> >> }
> >> } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> >> op->private_data_offset) {
> >> - m_data = (union rte_event_crypto_metadata *)
> >> + m_data = (struct rte_event_crypto_metadata *)
> >> ((uint8_t *)op +
> >> op->private_data_offset);
> >> } else {
> >> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> index ecf7eb9f56..458e8306d7 100644
> >> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >> @@ -16,7 +16,7 @@
> >> static inline uint16_t
> >> otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) {
> >> - union rte_event_crypto_metadata *m_data;
> >> + struct rte_event_crypto_metadata *m_data;
> >> struct rte_crypto_op *crypto_op;
> >> struct rte_cryptodev *cdev;
> >> struct otx2_cpt_qp *qp;
> >> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct
> >> rte_event
> >> *ev)
> >> qp_id = m_data->request_info.queue_pair_id;
> >> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS
> &&
> >> crypto_op->private_data_offset) {
> >> - m_data = (union rte_event_crypto_metadata *)
> >> + m_data = (struct rte_event_crypto_metadata *)
> >> ((uint8_t *)crypto_op +
> >> crypto_op->private_data_offset);
> >> cdev_id = m_data->request_info.cdev_id; diff --git
> >> a/lib/eventdev/rte_event_crypto_adapter.c
> >> b/lib/eventdev/rte_event_crypto_adapter.c
> >> index e1d38d383d..6977391ae9 100644
> >> --- a/lib/eventdev/rte_event_crypto_adapter.c
> >> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> >> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct
> >> rte_event_crypto_adapter *adapter,
> >> struct rte_event *ev, unsigned int cnt) {
> >> struct rte_event_crypto_adapter_stats *stats = &adapter-
> >> >crypto_stats;
> >> - union rte_event_crypto_metadata *m_data = NULL;
> >> + struct rte_event_crypto_metadata *m_data = NULL;
> >> struct crypto_queue_pair_info *qp_info = NULL;
> >> struct rte_crypto_op *crypto_op;
> >> unsigned int i, n;
> >> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct
> >> rte_event_crypto_adapter *adapter,
> >> len++;
> >> } else if (crypto_op->sess_type ==
> RTE_CRYPTO_OP_SESSIONLESS &&
> >> crypto_op->private_data_offset) {
> >> - m_data = (union rte_event_crypto_metadata *)
> >> + m_data = (struct rte_event_crypto_metadata *)
> >> ((uint8_t *)crypto_op +
> >> crypto_op->private_data_offset);
> >> cdev_id = m_data->request_info.cdev_id; @@ -
> >> 504,7 +504,7 @@ eca_ops_enqueue_burst(struct
> rte_event_crypto_adapter
> >> *adapter,
> >> struct rte_crypto_op **ops, uint16_t num) {
> >> struct rte_event_crypto_adapter_stats *stats = &adapter-
> >> >crypto_stats;
> >> - union rte_event_crypto_metadata *m_data = NULL;
> >> + struct rte_event_crypto_metadata *m_data = NULL;
> >> uint8_t event_dev_id = adapter->eventdev_id;
> >> uint8_t event_port_id = adapter->event_port_id;
> >> struct rte_event events[BATCH_SIZE]; @@ -523,7 +523,7 @@
> >> eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
> >> ops[i]->sym->session);
> >> } else if (ops[i]->sess_type ==
> >> RTE_CRYPTO_OP_SESSIONLESS &&
> >> ops[i]->private_data_offset) {
> >> - m_data = (union rte_event_crypto_metadata *)
> >> + m_data = (struct rte_event_crypto_metadata *)
> >> ((uint8_t *)ops[i] +
> >> ops[i]->private_data_offset);
> >> }
> >> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
> >> b/lib/eventdev/rte_event_crypto_adapter.h
> >> index f8c6cca87c..3c24d9d9df 100644
> >> --- a/lib/eventdev/rte_event_crypto_adapter.h
> >> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> >> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
> >> * provide event request information to the adapter.
> >> */
> >> struct rte_event_crypto_request {
> >> - uint8_t resv[8];
> >> - /**< Overlaps with first 8 bytes of struct rte_event
> >> - * that encode the response event information. Application
> >> - * is expected to fill in struct rte_event response_info.
> >> - */
> >> uint16_t cdev_id;
> >> /**< cryptodev ID to be used */
> >> uint16_t queue_pair_id;
> >> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
> >> * operation. If the transfer is done by SW, event response information
> >> * will be used by the adapter.
> >> */
> >> -union rte_event_crypto_metadata {
> >> - struct rte_event_crypto_request request_info;
> >> - /**< Request information to be filled in by application
> >> - * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> >> - */
> >> +struct rte_event_crypto_metadata {
> >> struct rte_event response_info;
> >> /**< Response information to be filled in by application
> >> * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> >> * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> >> */
> >> + struct rte_event_crypto_request request_info;
> >> + /**< Request information to be filled in by application
> >> + * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> >> + */
> >> };
> >>
> >> /**
> >> --
> >> 2.25.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 1/6] security: add SA lifetime configuration
@ 2021-09-07 16:32 8% ` Anoob Joseph
0 siblings, 0 replies; 200+ results
From: Anoob Joseph @ 2021-09-07 16:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
Cc: Anoob Joseph, Jerin Jacob, Archana Muniganti, Tejasree Kondoj,
Hemant Agrawal, Radu Nicolau, Ciara Power, Gagandeep Singh, dev
Add SA lifetime configuration to register soft and hard expiry limits.
Expiry can be in units of number of packets or bytes. Crypto op
status is also updated to include a new field, aux_flags, which can be
used to indicate cases such as soft expiry in case of lookaside
protocol operations.
In case of soft expiry, the packets are successfully IPsec processed but
the soft expiry would indicate that the SA needs to be reconfigured. For
an inline protocol capable ethdev, this would result in an eth event, while
for a lookaside protocol capable cryptodev, this can be communicated via
the `rte_crypto_op.aux_flags` field.
In case of hard expiry, the packets will not be IPsec processed and
would result in an error.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
.../test_cryptodev_security_ipsec_test_vectors.h | 3 ---
doc/guides/rel_notes/deprecation.rst | 5 ----
doc/guides/rel_notes/release_21_11.rst | 13 ++++++++++
examples/ipsec-secgw/ipsec.c | 2 +-
examples/ipsec-secgw/ipsec.h | 2 +-
lib/cryptodev/rte_crypto.h | 18 +++++++++++++-
lib/security/rte_security.h | 28 ++++++++++++++++++++--
7 files changed, 58 insertions(+), 13 deletions(-)
diff --git a/app/test/test_cryptodev_security_ipsec_test_vectors.h b/app/test/test_cryptodev_security_ipsec_test_vectors.h
index ae9cd24..38ea43d 100644
--- a/app/test/test_cryptodev_security_ipsec_test_vectors.h
+++ b/app/test/test_cryptodev_security_ipsec_test_vectors.h
@@ -98,7 +98,6 @@ struct ipsec_test_data pkt_aes_128_gcm = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
- .esn_soft_limit = 0,
.replay_win_sz = 0,
},
@@ -195,7 +194,6 @@ struct ipsec_test_data pkt_aes_192_gcm = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
- .esn_soft_limit = 0,
.replay_win_sz = 0,
},
@@ -295,7 +293,6 @@ struct ipsec_test_data pkt_aes_256_gcm = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
- .esn_soft_limit = 0,
.replay_win_sz = 0,
},
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abf..6118f06 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -282,8 +282,3 @@ Deprecation Notices
* security: The functions ``rte_security_set_pkt_metadata`` and
``rte_security_get_userdata`` will be made inline functions and additional
flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
-
-* cryptodev: The structure ``rte_crypto_op`` would be updated to reduce
- reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
- information from the crypto/security operation. This field will be used to
- communicate events such as soft expiry with IPsec in lookaside mode.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9b14c84..0e3ed28 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -102,6 +102,13 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cryptodev: use 1 reserved byte from ``rte_crypto_op`` for aux flags
+
+ * Updated the structure ``rte_crypto_op`` to reduce reserved bytes to
+ 2 (from 3), and use 1 byte to indicate warnings and other information from
+ the crypto/security operation. This field will be used to communicate events
+ such as soft expiry with IPsec in lookaside mode.
+
ABI Changes
-----------
@@ -123,6 +130,12 @@ ABI Changes
* Added IPsec SA option to disable IV generation to allow known vector
tests as well as usage of application provided IV on supported PMDs.
+* security: add IPsec SA lifetime configuration
+
+ * Added IPsec SA lifetime configuration to allow applications to configure
+ soft and hard SA expiry limits. Limits can be either in units of packets or
+ bytes.
+
Known Issues
------------
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 5b032fe..4868294 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -49,7 +49,7 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec)
}
/* TODO support for Transport */
}
- ipsec->esn_soft_limit = IPSEC_OFFLOAD_ESN_SOFTLIMIT;
+ ipsec->life.packets_soft_limit = IPSEC_OFFLOAD_PKTS_SOFTLIMIT;
ipsec->replay_win_sz = app_sa_prm.window_size;
ipsec->options.esn = app_sa_prm.enable_esn;
ipsec->options.udp_encap = sa->udp_encap;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index ae5058d..90c81c1 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -23,7 +23,7 @@
#define MAX_DIGEST_SIZE 32 /* Bytes -- 256 bits */
-#define IPSEC_OFFLOAD_ESN_SOFTLIMIT 0xffffff00
+#define IPSEC_OFFLOAD_PKTS_SOFTLIMIT 0xffffff00
#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
sizeof(struct rte_crypto_sym_op))
diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
index fd5ef3a..d602183 100644
--- a/lib/cryptodev/rte_crypto.h
+++ b/lib/cryptodev/rte_crypto.h
@@ -66,6 +66,17 @@ enum rte_crypto_op_sess_type {
};
/**
+ * Auxiliary flags to indicate additional info from the operation
+ */
+
+/**
+ * Auxiliary flags related to IPsec offload with RTE_SECURITY
+ */
+
+#define RTE_CRYPTO_OP_AUX_FLAGS_IPSEC_SOFT_EXPIRY (1 << 0)
+/**< SA soft expiry limit has been reached */
+
+/**
* Cryptographic Operation.
*
* This structure contains data relating to performing cryptographic
@@ -93,7 +104,12 @@ struct rte_crypto_op {
*/
uint8_t sess_type;
/**< operation session type */
- uint8_t reserved[3];
+ uint8_t aux_flags;
+ /**< Operation specific auxiliary/additional flags.
+ * These flags carry additional information from the
+ * operation. Processing of the same is optional.
+ */
+ uint8_t reserved[2];
/**< Reserved bytes to fill 64 bits for
* future additions
*/
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index b4b6776..95c169d 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -206,6 +206,30 @@ enum rte_security_ipsec_sa_direction {
};
/**
+ * Configure soft and hard lifetime of an IPsec SA
+ *
+ * Lifetime of an IPsec SA would specify the maximum number of packets or bytes
+ * that can be processed. IPsec operations would start failing once any hard
+ * limit is reached.
+ *
+ * Soft limits can be specified to generate notification when the SA is
+ * approaching hard limits for lifetime. For inline operations, reaching soft
+ * expiry limit would result in raising an eth event for the same. For lookaside
+ * operations, this would result in a warning returned in
+ * ``rte_crypto_op.aux_flags``.
+ */
+struct rte_security_ipsec_lifetime {
+ uint64_t packets_soft_limit;
+ /**< Soft expiry limit in number of packets */
+ uint64_t bytes_soft_limit;
+ /**< Soft expiry limit in bytes */
+ uint64_t packets_hard_limit;
+ /**< Hard expiry limit in number of packets */
+ uint64_t bytes_hard_limit;
+ /**< Hard expiry limit in bytes */
+};
+
+/**
* IPsec security association configuration data.
*
* This structure contains data required to create an IPsec SA security session.
@@ -225,8 +249,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
- uint64_t esn_soft_limit;
- /**< ESN for which the overflow event need to be raised */
+ struct rte_security_ipsec_lifetime life;
+ /**< IPsec SA lifetime */
uint32_t replay_win_sz;
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
--
2.7.4
^ permalink raw reply [relevance 8%]
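As a usage sketch of the fields added in the patch above (field and flag names are
taken from the diff; the limit values and the rekey hook are illustrative only):

    #include <rte_crypto.h>
    #include <rte_security.h>

    /* Configure soft limits well below the hard limits so the application
     * is warned (eth event or aux_flags) before the SA stops processing. */
    static void
    set_sa_lifetime(struct rte_security_ipsec_xform *ipsec)
    {
        ipsec->life.packets_soft_limit = 1ULL << 20;
        ipsec->life.packets_hard_limit = 1ULL << 21;
        ipsec->life.bytes_soft_limit   = 1ULL << 30;
        ipsec->life.bytes_hard_limit   = 1ULL << 31;
    }

    /* On the lookaside path, check the new per-op flag after dequeue. */
    static void
    check_soft_expiry(struct rte_crypto_op *op)
    {
        if (op->aux_flags & RTE_CRYPTO_OP_AUX_FLAGS_IPSEC_SOFT_EXPIRY) {
            /* soft limit reached: schedule SA rekey/reconfiguration */
        }
    }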
* [dpdk-dev] [PATCH v3 1/3] security: support user specified IV
2021-09-07 16:17 3% ` [dpdk-dev] [PATCH v3 0/3] Add user specified IV with lookaside IPsec Anoob Joseph
@ 2021-09-07 16:17 5% ` Anoob Joseph
0 siblings, 0 replies; 200+ results
From: Anoob Joseph @ 2021-09-07 16:17 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
Cc: Anoob Joseph, Jerin Jacob, Archana Muniganti, Tejasree Kondoj,
Hemant Agrawal, Radu Nicolau, Ciara Power, Gagandeep Singh, dev
Enable the user to provide an IV to be used per security operation. This
would be used with lookaside protocol offload for comparing
against known vectors.
By default, the PMD would generate the IV internally, and it would be random.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 14 ++++++++++++++
2 files changed, 19 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 411fa95..9b14c84 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -118,6 +118,11 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* security: add IPsec SA option to disable IV generation
+
+ * Added IPsec SA option to disable IV generation to allow known vector
+ tests as well as usage of application provided IV on supported PMDs.
+
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de..b4b6776 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,20 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Disable IV generation in PMD
+ *
+ * * 1: Disable IV generation in PMD. When disabled, IV provided in
+ * rte_crypto_op will be used by the PMD.
+ *
+ * * 0: Enable IV generation in PMD. When enabled, PMD generated random
+ * value would be used and application is not required to provide
+ * IV.
+ *
+ * Note: For inline cases, IV generation would always need to be handled
+ * by the PMD.
+ */
+ uint32_t iv_gen_disable : 1;
};
/** IPSec security association direction */
--
2.7.4
^ permalink raw reply [relevance 5%]
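A brief sketch of how an application might use the new option from the patch above
(the option bit is from the diff; how the per-op IV is laid out is PMD/application
specific and not shown here):

    #include <rte_security.h>

    static void
    enable_known_vector_iv(struct rte_security_ipsec_xform *ipsec_xform)
    {
        /* Ask the PMD to stop generating IVs; the IV supplied along with
         * each rte_crypto_op is used instead (required for known vector
         * tests). Leaving this at 0 keeps PMD-generated random IVs. */
        ipsec_xform->options.iv_gen_disable = 1;
    }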
* [dpdk-dev] [PATCH v3 0/3] Add user specified IV with lookaside IPsec
2021-09-06 14:58 4% ` [dpdk-dev] [PATCH v2 1/3] security: support user specified IV Anoob Joseph
@ 2021-09-07 16:17 3% ` Anoob Joseph
2021-09-07 16:17 5% ` [dpdk-dev] [PATCH v3 1/3] security: support user specified IV Anoob Joseph
1 sibling, 1 reply; 200+ results
From: Anoob Joseph @ 2021-09-07 16:17 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
Cc: Anoob Joseph, Jerin Jacob, Archana Muniganti, Tejasree Kondoj,
Hemant Agrawal, Radu Nicolau, Ciara Power, Gagandeep Singh, dev
Add support for using a user provided IV with lookaside protocol (IPsec). Using
this option, the application can provide the IV to be used per operation. This
option can be used for known vector tests (which are otherwise impossible
due to the random nature of the IV) as well as when the application wishes to
use its own random generator source.
Depends on
http://patches.dpdk.org/project/dpdk/list/?series=18642
Changes in v3:
- Moved release notes update to ABI section instead of API section
Changes in v2:
- Updated crypto/cnxk patch to handle non-aes-gcm cases
- Rebased on v3 of lookaside IPsec tests
Anoob Joseph (2):
security: support user specified IV
test/crypto: add outbound known vector tests
Tejasree Kondoj (1):
crypto/cnxk: add IV in SA in lookaside IPsec debug mode
app/test/test_cryptodev.c | 44 +++++++++++++++++++++++
app/test/test_cryptodev_security_ipsec.c | 16 ++++++++-
doc/guides/rel_notes/release_21_11.rst | 5 +++
drivers/crypto/cnxk/cn10k_ipsec.c | 16 +++++++++
drivers/crypto/cnxk/cn10k_ipsec.h | 2 ++
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 44 +++++++++++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c | 29 +++++++++++++--
drivers/crypto/cnxk/meson.build | 6 ++++
lib/security/rte_security.h | 14 ++++++++
9 files changed, 173 insertions(+), 3 deletions(-)
--
2.7.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-07 15:21 0% ` Burakov, Anatoly
@ 2021-09-07 16:08 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-09-07 16:08 UTC (permalink / raw)
To: Burakov, Anatoly, Ding, Xuan, Kinsella, Ray, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon
On 9/7/2021 4:21 PM, Burakov, Anatoly wrote:
> On 06-Sep-21 2:43 PM, Ferruh Yigit wrote:
>> On 9/6/2021 9:51 AM, Ding, Xuan wrote:
>>> Hi,
>>>
>>>> -----Original Message-----
>>>> From: Kinsella, Ray <mdr@ashroe.eu>
>>>> Sent: Friday, September 3, 2021 12:13 AM
>>>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Burakov, Anatoly
>>>> <anatoly.burakov@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
>>>> dev@dpdk.org
>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>>>> Thomas Monjalon <thomas@monjalon.net>
>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>
>>>>
>>>>
>>>> On 02/09/2021 10:50, Ferruh Yigit wrote:
>>>>> On 9/1/2021 2:25 PM, Burakov, Anatoly wrote:
>>>>>> On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
>>>>>>> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
>>>>>>>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>>>>>>>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>>>>>>>>> Hi Ferruh,
>>>>>>>>>>
>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>>>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>>>>>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov,
>>>> Anatoly
>>>>>>>>>>> <anatoly.burakov@intel.com>
>>>>>>>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo
>>>> <chenbo.xia@intel.com>; Hu,
>>>>>>>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce
>>>> <bruce.richardson@intel.com>
>>>>>>>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>>>>>>>>
>>>>>>>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>>>>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for
>>>> the
>>>>>>>>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>>>>>>>>> side effect of compacting two separate mappings that just happen to
>>>> be
>>>>>>>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also
>>>> does
>>>>>>>>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>>>>>>>>> DPDK
>>>>>>>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>>>>>>>>> maps even though it could have been unmapped [1].
>>>>>>>>>>>>
>>>>>>>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>>>>>>>>> also include page size, and always map memory page-by-page.
>>>>>>>>>>>>
>>>>>>>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>>>>>>>> 1 file changed, 3 insertions(+)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>>> index 76a4abfd6b..1234420caf 100644
>>>>>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>>>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings
>>>> and
>>>>>>>>>>> other
>>>>>>>>>>>> information from the crypto/security operation. This field
>>>>>>>>>>>> will be
>>>>>>>>>>>> used to
>>>>>>>>>>>> communicate events such as soft expiry with IPsec in lookaside
>>>> mode.
>>>>>>>>>>>> +
>>>>>>>>>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended
>>>> to
>>>>>>>>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Is this means adding a new parameter to API?
>>>>>>>>>>> If so this is an ABI/API break and we can't do this change in the 22.02.
>>>>>>>>>>
>>>>>>>>>> Our original plan is add a new parameter in order not to use a new
>>>> function
>>>>>>>>>> name, so you mean, any changes to the API can only be done in the LTS
>>>> version?
>>>>>>>>>> If so, we can only add a new API and retire the old one in 22.11.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> We can add a new API anytime. Adding new parameter to an existing API
>>>> can be
>>>>>>>>> done on the ABI break release.
>>>>>>>>
>>>>>>>> So, 22.11 then?
>>>>>>>>
>>>>>>>
>>>>>>> Yes.
>>>>>>>
>>>>>>>>>
>>>>>>>>> You can add the new API in this release, and start using it.
>>>>>>>>> And mark the old API as deprecated in this release. This lets existing
>>>> binaries
>>>>>>>>> to keep using it, but app needs to switch to new API for compilation.
>>>>>>>>> Old API can be removed on 22.11, and you will need a deprecation notice
>>>> before
>>>>>>>>> 22.11 for it.
>>>>>>>>>
>>>>>>>>> Is above plan works for you?
>>>>>>>>>
>>>>>>>>
>>>>>>>> We have slightly rethought our approach, and the functionality that Xuan
>>>>>>>> requires does not rely on this API. They can, for all intents and
>>>>>>>> purposes, be
>>>>>>>> considered unrelated issues.
>>>>>>>>
>>>>>>>> I still think it's a good idea to update the API that way, so I would
>>>>>>>> like to do
>>>>>>>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
>>>>>>>> there no longer is any urgency here, it's acceptable to wait for the
>>>>>>>> next LTS
>>>> to
>>>>>>>> break it.
>>>>>>>>
>>>>>>>
>>>>>>> Got it.
>>>>>>>
>>>>>>> As far as I understand, main motivation in techboard decision was to
>>>> prevent the
>>>>>>> ABI break as much as possible (main reason of decision wasn't deprecation
>>>> notice
>>>>>>> being late). But if the correct thing to do is to rename the API (and
>>>>>>> break the
>>>>>>> ABI), I don't see the benefit to wait one more year, it is just delaying the
>>>>>>> impact and adding overhead to us.
>>>>>>> I am for being pragmatic and doing the change in this release if API rename
>>>> is
>>>>>>> better option, perhaps we can visit the issue again in techboard.
>>>>>>>
>>>>>>> Can you please describe why renaming API is better option, comparing to
>>>> adding
>>>>>>> new API with new parameter?
>>>>>>
>>>>>> I take it you meant "why renaming API *isn't* a better option".
>>>>>>
>>>>>> The problem we're solving is that the API in question does not know about
>>>> page
>>>>>> sizes and thus can't map segments page-by-page. I mean I /guess/ we could
>>>> have
>>>>>> two API's (one paged, one not paged), but then we get into all kinds of hairy
>>>>>> things about the API leaking the details of underlying platform.
>>>>>>
>>>>>> Bottom line: i like current API function name. It's concise, it's
>>>>>> descriptive.
>>>>>> It's only missing a parameter, which i would like to add. A rename has been
>>>>>> suggested (deprecate old API, add new API with a different name, and with
>>>> added
>>>>>> parameter), but honestly, I don't see why we have to do that because this is
>>>>>> predicated upon the assumption that we *can't* break ABI at all, under any
>>>>>> circumstances.
>>>>>>
>>>>>> Can you please explain to me what is wrong with keeping a versioned symbol?
>>>>>> Like, keep the old function around to keep ABI compatibility, but break the
>>>> API
>>>>>> compatibility for those who target 22.02 or later? That's what symbol
>>>> versioning
>>>>>> is *for*, is it not?
>>>>>>
>>>>>
>>>>> Nothing wrong with symbol versioning, indeed that is preferred way if it works
>>>>> for you, I didn't get that symbol versioning is planned.
>>>>>
>>>>> @Ray,
>>>>> Since symbol versioning is planned, ABI won't break, but API will change, can
>>>>> this change be done in this release without deprecation notice?
>>>>
>>>> Yes - I would think so.
>>>> Since we are going to the effort of using symbol versioning nothing is being
>>>> depreciated as such (yet).
>>>>
>>>>> Later we can have a deprecation notice to remove old symbol on 22.11.
>>>
>>> Thanks for your explanation.
>>> @Yigit, Ferruh Does it mean that we can do API change in 21.11? If so, we will
>>> follow the process and target API change in this release. :)
>>>
>>
>> With symbol versioning, yes you can make the change in this release.
>>
>> You can send another deprecation notice to remove the old symbol for 22.11, that
>> can be sent anytime until 22.08 released.
>>
>
> Hi Ferruh and others,
>
> We have decided to switch gears somewhat :)
>
> The original intent was to ensure that two adjacent segments can be freely
> mapped and unmapped. The easiest solution was page-by-page mapping, so that's
> where the API change idea came from.
>
> However, due to tech board decision to not grant the exception at the time, we
> have figured out a way to avoid the API change. The API change is still a good
> idea because mapping things page-by-page is a valid use case that is currently
> not covered by the API, so the next plan was to do the API change in a later
> release as a separate issue, not related to original Xuan's intent.
>
> However, we've been discussing implementation details with Xuan, and we arrived
> at a realization that what Xuan wants to implement not only does not *require*
> page size, it *cannot* be implemented with page size API because there's no way
> to know page size for that memory at the time of the API call. So, turns out we
> need both paged and page-less versions :)
>
> This means that there are no deprecation notices now, because we will be adding
> a new API after all, but we will *not* deprecate or remove the old API - that
> one will still stay valid.
>
> Apologies for constantly shifting ground, but in the end i think we arrived at
> the best possible solution for this problem!
>
So there won't be symbol versioning but only a new API, which means no deprecation
notice is required. Please update this patch's status accordingly.
Thanks for continuing to work on the issue to find a better solution.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-06 13:43 0% ` Ferruh Yigit
@ 2021-09-07 15:21 0% ` Burakov, Anatoly
2021-09-07 16:08 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-09-07 15:21 UTC (permalink / raw)
To: Ferruh Yigit, Ding, Xuan, Kinsella, Ray, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon
On 06-Sep-21 2:43 PM, Ferruh Yigit wrote:
> On 9/6/2021 9:51 AM, Ding, Xuan wrote:
>> Hi,
>>
>>> -----Original Message-----
>>> From: Kinsella, Ray <mdr@ashroe.eu>
>>> Sent: Friday, September 3, 2021 12:13 AM
>>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Burakov, Anatoly
>>> <anatoly.burakov@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
>>> dev@dpdk.org
>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>>> Thomas Monjalon <thomas@monjalon.net>
>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>
>>>
>>>
>>> On 02/09/2021 10:50, Ferruh Yigit wrote:
>>>> On 9/1/2021 2:25 PM, Burakov, Anatoly wrote:
>>>>> On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
>>>>>> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
>>>>>>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>>>>>>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>>>>>>>> Hi Ferruh,
>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>>>>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov,
>>> Anatoly
>>>>>>>>>> <anatoly.burakov@intel.com>
>>>>>>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo
>>> <chenbo.xia@intel.com>; Hu,
>>>>>>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce
>>> <bruce.richardson@intel.com>
>>>>>>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>>>>>>>
>>>>>>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>>>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for
>>> the
>>>>>>>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>>>>>>>> side effect of compacting two separate mappings that just happen to
>>> be
>>>>>>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also
>>> does
>>>>>>>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>>>>>>>> DPDK
>>>>>>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>>>>>>>> maps even though it could have been unmapped [1].
>>>>>>>>>>>
>>>>>>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>>>>>>>> also include page size, and always map memory page-by-page.
>>>>>>>>>>>
>>>>>>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>>>>>>>> ---
>>>>>>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>>>>>>> 1 file changed, 3 insertions(+)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>> index 76a4abfd6b..1234420caf 100644
>>>>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings
>>> and
>>>>>>>>>> other
>>>>>>>>>>> information from the crypto/security operation. This field will be
>>>>>>>>>>> used to
>>>>>>>>>>> communicate events such as soft expiry with IPsec in lookaside
>>> mode.
>>>>>>>>>>> +
>>>>>>>>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended
>>> to
>>>>>>>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Is this means adding a new parameter to API?
>>>>>>>>>> If so this is an ABI/API break and we can't do this change in the 22.02.
>>>>>>>>>
>>>>>>>>> Our original plan is add a new parameter in order not to use a new
>>> function
>>>>>>>>> name, so you mean, any changes to the API can only be done in the LTS
>>> version?
>>>>>>>>> If so, we can only add a new API and retire the old one in 22.11.
>>>>>>>>>
>>>>>>>>
>>>>>>>> We can add a new API anytime. Adding new parameter to an existing API
>>> can be
>>>>>>>> done on the ABI break release.
>>>>>>>
>>>>>>> So, 22.11 then?
>>>>>>>
>>>>>>
>>>>>> Yes.
>>>>>>
>>>>>>>>
>>>>>>>> You can add the new API in this release, and start using it.
>>>>>>>> And mark the old API as deprecated in this release. This lets existing
>>> binaries
>>>>>>>> to keep using it, but app needs to switch to new API for compilation.
>>>>>>>> Old API can be removed on 22.11, and you will need a deprecation notice
>>> before
>>>>>>>> 22.11 for it.
>>>>>>>>
>>>>>>>> Is above plan works for you?
>>>>>>>>
>>>>>>>
>>>>>>> We have slightly rethought our approach, and the functionality that Xuan
>>>>>>> requires does not rely on this API. They can, for all intents and purposes, be
>>>>>>> considered unrelated issues.
>>>>>>>
>>>>>>> I still think it's a good idea to update the API that way, so I would like to do
>>>>>>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
>>>>>>> there no longer is any urgency here, it's acceptable to wait for the next LTS
>>> to
>>>>>>> break it.
>>>>>>>
>>>>>>
>>>>>> Got it.
>>>>>>
>>>>>> As far as I understand, main motivation in techboard decision was to
>>> prevent the
>>>>>> ABI break as much as possible (main reason of decision wasn't deprecation
>>> notice
>>>>>> being late). But if the correct thing to do is to rename the API (and break the
>>>>>> ABI), I don't see the benefit to wait one more year, it is just delaying the
>>>>>> impact and adding overhead to us.
>>>>>> I am for being pragmatic and doing the change in this release if API rename
>>> is
>>>>>> better option, perhaps we can visit the issue again in techboard.
>>>>>>
>>>>>> Can you please describe why renaming API is better option, comparing to
>>> adding
>>>>>> new API with new parameter?
>>>>>
>>>>> I take it you meant "why renaming API *isn't* a better option".
>>>>>
>>>>> The problem we're solving is that the API in question does not know about
>>> page
>>>>> sizes and thus can't map segments page-by-page. I mean I /guess/ we could
>>> have
>>>>> two API's (one paged, one not paged), but then we get into all kinds of hairy
>>>>> things about the API leaking the details of underlying platform.
>>>>>
>>>>> Bottom line: i like current API function name. It's concise, it's descriptive.
>>>>> It's only missing a parameter, which i would like to add. A rename has been
>>>>> suggested (deprecate old API, add new API with a different name, and with
>>> added
>>>>> parameter), but honestly, I don't see why we have to do that because this is
>>>>> predicated upon the assumption that we *can't* break ABI at all, under any
>>>>> circumstances.
>>>>>
>>>>> Can you please explain to me what is wrong with keeping a versioned symbol?
>>>>> Like, keep the old function around to keep ABI compatibility, but break the
>>> API
>>>>> compatibility for those who target 22.02 or later? That's what symbol
>>> versioning
>>>>> is *for*, is it not?
>>>>>
>>>>
>>>> Nothing wrong with symbol versioning, indeed that is preferred way if it works
>>>> for you, I didn't get that symbol versioning is planned.
>>>>
>>>> @Ray,
>>>> Since symbol versioning is planned, ABI won't break, but API will change, can
>>>> this change be done in this release without deprecation notice?
>>>
>>> Yes - I would think so.
>>> Since we are going to the effort of using symbol versioning nothing is being
>>> depreciated as such (yet).
>>>
>>>> Later we can have a deprecation notice to remove old symbol on 22.11.
>>
>> Thanks for your explanation.
>> @Yigit, Ferruh Does it mean that we can do API change in 21.11? If so, we will
>> follow the process and target API change in this release. :)
>>
>
> With symbol versioning, yes you can make the change in this release.
>
> You can send another deprecation notice to remove the old symbol for 22.11, that
> can be sent anytime until 22.08 released.
>
Hi Ferruh and others,
We have decided to switch gears somewhat :)
The original intent was to ensure that two adjacent segments can be
freely mapped and unmapped. The easiest solution was page-by-page
mapping, so that's where the API change idea came from.
However, due to the tech board's decision not to grant the exception at the
time, we have figured out a way to avoid the API change. The API change
is still a good idea because mapping things page-by-page is a valid use
case that is currently not covered by the API, so the next plan was to
do the API change in a later release as a separate issue, not related to
Xuan's original intent.
However, we've been discussing implementation details with Xuan, and we
arrived at a realization that what Xuan wants to implement not only does
not *require* page size, it *cannot* be implemented with page size API
because there's no way to know page size for that memory at the time of
the API call. So, turns out we need both paged and page-less versions :)
This means that there are no deprecation notices now, because we will be
adding a new API after all, but we will *not* deprecate or remove the
old API - that one will still stay valid.
Apologies for constantly shifting ground, but in the end I think we
arrived at the best possible solution for this problem!
--
Thanks,
Anatoly
^ permalink raw reply [relevance 0%]
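For reference, the shape of the two variants discussed in this thread. The first
prototype matches the existing rte_vfio.h API; the paged variant is only a
hypothetical sketch of the proposal, not a final upstream API:

    /* Existing page-less API: adjacent regions may be compacted together. */
    int rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
                                   uint64_t iova, uint64_t len);

    /* Hypothetical paged variant: mapping page-by-page keeps adjacent
     * regions independently unmappable. */
    int rte_vfio_container_dma_map_paged(int container_fd, uint64_t vaddr,
                                         uint64_t iova, uint64_t len,
                                         uint64_t pagesz);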
* [dpdk-dev] [RFC PATCH v6 0/5] Add PIE support for HQoS library
2021-09-07 7:33 3% ` [dpdk-dev] [RFC PATCH v5 0/5] " Liguzinski, WojciechX
@ 2021-09-07 14:11 3% ` Liguzinski, WojciechX
1 sibling, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-09-07 14:11 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffering in the network causes high latency and
latency variation. Currently, it supports RED for active queue management; RED is designed
to control the queue length, but it does not control latency directly and is now considered
obsolete. However, more advanced queue management is required to address this problem
and provide desirable quality of service to users.
This solution (RFC) proposes usage of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
The implementation of the mentioned functionality includes modification of existing data
structures and the addition of a new set of data structures to the library, along with new
PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/autotest_data.py | 18 +
app/test/meson.build | 4 +
app/test/test_pie.c | 1065 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 60 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 6 +-
examples/qos_sched/app_thread.c | 1 -
examples/qos_sched/cfg_file.c | 82 +-
examples/qos_sched/init.c | 7 +-
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 10 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 228 ++--
lib/sched/rte_sched.h | 53 +-
lib/sched/version.map | 3 +
19 files changed, 2050 insertions(+), 190 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
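For background, the drop-probability update at the heart of PIE (per RFC 8033);
the gains and scaling below are illustrative only, not the values used in
lib/sched/rte_pie.c:

    /* Periodically recompute the drop probability from the estimated
     * queuing delay: an integral term pulls the delay toward the target,
     * a proportional term reacts to the delay trend. */
    static double
    pie_update_drop_prob(double p, double qdelay, double qdelay_old,
                         double target)
    {
        const double alpha = 0.125; /* integral gain (illustrative) */
        const double beta  = 1.25;  /* proportional gain (illustrative) */

        p += alpha * (qdelay - target) + beta * (qdelay - qdelay_old);
        if (p < 0.0)
            p = 0.0;
        else if (p > 1.0)
            p = 1.0;
        return p;
    }

Each arriving packet is then dropped with probability p before enqueue, which is
how the algorithm controls latency directly rather than queue length.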
* [dpdk-dev] [PATCH v21 0/7] support dmadev
2021-09-04 10:10 3% ` [dpdk-dev] [PATCH v20 0/7] support dmadev Chengwen Feng
@ 2021-09-07 12:56 3% ` Chengwen Feng
1 sibling, 0 replies; 200+ results
From: Chengwen Feng @ 2021-09-07 12:56 UTC (permalink / raw)
To: thomas, ferruh.yigit, bruce.richardson, jerinj, jerinjacobk,
andrew.rybchenko
Cc: dev, mb, nipun.gupta, hemant.agrawal, maxime.coquelin,
honnappa.nagarahalli, david.marchand, sburla, pkapoor,
konstantin.ananyev, conor.walsh, kevin.laatz
This patch set contains seven patch for new add dmadev.
Chengwen Feng (7):
dmadev: introduce DMA device library public APIs
dmadev: introduce DMA device library internal header
dmadev: introduce DMA device library PMD header
dmadev: introduce DMA device library implementation
doc: add DMA device library guide
dma/skeleton: introduce skeleton dmadev driver
app/test: add dmadev API test
---
v21:
* add comment for reserved fields of struct rte_dmadev.
v20:
* delete unnecessary and duplicate include header files.
* the conf_sz parameter is added to the configure and vchan-setup
callbacks of the PMD; this is mainly used to enhance ABI
compatibility.
* the rte_dmadev structure field is rearranged to reserve more space
for I/O functions.
* fix some ambiguous and unnecessary comments.
* fix a potential memory leak in the UT.
* rename skeldma_init_once to skeldma_count.
* suppress rte_dmadev error output when executing the UT.
v19:
* squash maintainer patch to patch #1.
v18:
* RTE_DMA_STATUS_* add BUS_READ/WRITE_ERR, PAGE_FAULT.
* rte_dmadev datapath APIs now check dev_started when debug is enabled.
* rte_dmadev_start/vchan_setup now check that the device is configured.
* rte_dmadev_dump now supports formatting capability names.
* optimized the comments of rte_dmadev.
* fix skeldma_copy always returning zero when enqueue is successful.
* log encapsulation macros now add newline characters.
* test_dmadev_api now covers the rte_dmadev_dump() UT.
MAINTAINERS | 7 +
app/test/meson.build | 4 +
app/test/test_dmadev.c | 43 +
app/test/test_dmadev_api.c | 543 ++++++++++++
config/rte_config.h | 3 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/dmadev.rst | 125 +++
doc/guides/prog_guide/img/dmadev.svg | 283 +++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/dma/meson.build | 11 +
drivers/dma/skeleton/meson.build | 7 +
drivers/dma/skeleton/skeleton_dmadev.c | 594 ++++++++++++++
drivers/dma/skeleton/skeleton_dmadev.h | 61 ++
drivers/dma/skeleton/version.map | 3 +
drivers/meson.build | 1 +
lib/dmadev/meson.build | 7 +
lib/dmadev/rte_dmadev.c | 607 ++++++++++++++
lib/dmadev/rte_dmadev.h | 1047 ++++++++++++++++++++++++
lib/dmadev/rte_dmadev_core.h | 187 +++++
lib/dmadev/rte_dmadev_pmd.h | 72 ++
lib/dmadev/version.map | 35 +
lib/meson.build | 1 +
24 files changed, 3649 insertions(+)
create mode 100644 app/test/test_dmadev.c
create mode 100644 app/test/test_dmadev_api.c
create mode 100644 doc/guides/prog_guide/dmadev.rst
create mode 100644 doc/guides/prog_guide/img/dmadev.svg
create mode 100644 drivers/dma/meson.build
create mode 100644 drivers/dma/skeleton/meson.build
create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
create mode 100644 drivers/dma/skeleton/version.map
create mode 100644 lib/dmadev/meson.build
create mode 100644 lib/dmadev/rte_dmadev.c
create mode 100644 lib/dmadev/rte_dmadev.h
create mode 100644 lib/dmadev/rte_dmadev_core.h
create mode 100644 lib/dmadev/rte_dmadev_pmd.h
create mode 100644 lib/dmadev/version.map
--
2.33.0
^ permalink raw reply [relevance 3%]
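On the conf_sz note in the v20 changes above: passing the caller's sizeof to the
PMD lets a driver detect a configuration struct built against a different ABI
revision. A hedged sketch of the idea (the callback name and exact signature
here are illustrative, not the final dmadev PMD API):

    #include <errno.h>
    #include <stdint.h>

    /* Driver-side configure callback: reject configs smaller than what
     * this driver was built against, and never read past conf_sz. */
    static int
    my_dma_configure(struct rte_dmadev *dev,
                     const struct rte_dmadev_conf *conf, uint32_t conf_sz)
    {
        if (conf_sz < sizeof(*conf))
            return -EINVAL; /* caller built against an older struct layout */

        /* ... apply *conf to dev ... */
        return 0;
    }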
* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-09-07 8:53 0% ` Gujjar, Abhinandan S
@ 2021-09-07 10:37 0% ` Shijith Thotton
2021-09-07 17:30 0% ` Gujjar, Abhinandan S
0 siblings, 1 reply; 200+ results
From: Shijith Thotton @ 2021-09-07 10:37 UTC (permalink / raw)
To: Gujjar, Abhinandan S, dev
Cc: Jerin Jacob Kollanukkaran, Anoob Joseph,
Pavan Nikhilesh Bhagavatula, Akhil Goyal, Ray Kinsella,
Ankur Dwivedi
Hi Abhinandan,
>> In crypto adapter metadata, the reserved bytes in the request info structure are a
>> placeholder for response info. This enforces an order of operations if the
>> structures are updated using memcpy, to avoid overwriting response info. It
>> is logical to move the reserved space out of request info. It also solves the
>> ordering issue mentioned before.
>I would like to understand what kind of ordering issue you have faced with
>the current approach. Could you please give an example/sequence and explain?
>
I have seen this issue with crypto adapter autotest (#n215).
Example:
rte_memcpy(&m_data.response_info, &response_info, sizeof(response_info));
rte_memcpy(&m_data.request_info, &request_info, sizeof(request_info));
Here response info is getting overwritten by request info.
The above lines can be reordered to fix the issue, but this can be ignored with this patch.
>>
>> This patch removes the reserve field from request info and makes event
>> crypto metadata type to structure from union to make space for response
>> info.
>>
>> App and drivers are updated as per metadata change.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> Acked-by: Anoob Joseph <anoobj@marvell.com>
>> ---
>> v3:
>> * Updated ABI section of release notes.
>>
>> v2:
>> * Updated deprecation notice.
>>
>> v1:
>> * Rebased.
>>
>> app/test/test_event_crypto_adapter.c | 14 +++++++-------
>> doc/guides/rel_notes/deprecation.rst | 6 ------
>> doc/guides/rel_notes/release_21_11.rst | 2 ++
>> drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
>> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
>> .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
>> lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
>> lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
>> 8 files changed, 26 insertions(+), 35 deletions(-)
>>
>> diff --git a/app/test/test_event_crypto_adapter.c
>> b/app/test/test_event_crypto_adapter.c
>> index 3ad20921e2..0d73694d3a 100644
>> --- a/app/test/test_event_crypto_adapter.c
>> +++ b/app/test/test_event_crypto_adapter.c
>> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less) {
>> struct rte_crypto_sym_xform cipher_xform;
>> struct rte_cryptodev_sym_session *sess;
>> - union rte_event_crypto_metadata m_data;
>> + struct rte_event_crypto_metadata m_data;
>> struct rte_crypto_sym_op *sym_op;
>> struct rte_crypto_op *op;
>> struct rte_mbuf *m;
>> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less) {
>> struct rte_crypto_sym_xform cipher_xform;
>> struct rte_cryptodev_sym_session *sess;
>> - union rte_event_crypto_metadata m_data;
>> + struct rte_event_crypto_metadata m_data;
>> struct rte_crypto_sym_op *sym_op;
>> struct rte_crypto_op *op;
>> struct rte_mbuf *m;
>> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
>> if (cap &
>> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
>> /* Fill in private user data information */
>> rte_memcpy(&m_data.response_info,
>> &response_info,
>> - sizeof(m_data));
>> + sizeof(response_info));
>> rte_cryptodev_sym_session_set_user_data(sess,
>> &m_data, sizeof(m_data));
>> }
>> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
>> op->private_data_offset = len;
>> /* Fill in private data information */
>> rte_memcpy(&m_data.response_info, &response_info,
>> - sizeof(m_data));
>> + sizeof(response_info));
>> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
>> }
>>
>> @@ -519,7 +519,7 @@ configure_cryptodev(void)
>> DEFAULT_NUM_XFORMS *
>> sizeof(struct rte_crypto_sym_xform) +
>> MAXIMUM_IV_LENGTH +
>> - sizeof(union rte_event_crypto_metadata),
>> + sizeof(struct rte_event_crypto_metadata),
>> rte_socket_id());
>> if (params.op_mpool == NULL) {
>> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
>> @@ -549,12 +549,12 @@ configure_cryptodev(void)
>> * to include the session headers & private data
>> */
>> session_size =
>> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
>> - session_size += sizeof(union rte_event_crypto_metadata);
>> + session_size += sizeof(struct rte_event_crypto_metadata);
>>
>> params.session_mpool = rte_cryptodev_sym_session_pool_create(
>> "CRYPTO_ADAPTER_SESSION_MP",
>> MAX_NB_SESSIONS, 0, 0,
>> - sizeof(union rte_event_crypto_metadata),
>> + sizeof(struct rte_event_crypto_metadata),
>> SOCKET_ID_ANY);
>> TEST_ASSERT_NOT_NULL(params.session_mpool,
>> "session mempool allocation failed\n"); diff --git
>> a/doc/guides/rel_notes/deprecation.rst
>> b/doc/guides/rel_notes/deprecation.rst
>> index 76a4abfd6b..58ee95c020 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -266,12 +266,6 @@ Deprecation Notices
>> values to the function ``rte_event_eth_rx_adapter_queue_add`` using
>> the structure ``rte_event_eth_rx_adapter_queue_add``.
>>
>> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space
>> holder
>> - for ``response_info``. Both should be decoupled for better clarity.
>> - New space for ``response_info`` can be made by changing
>> - ``rte_event_crypto_metadata`` type to structure from union.
>> - This change is targeted for DPDK 21.11.
>> -
>> * metrics: The function ``rte_metrics_init`` will have a non-void return
>> in order to notify errors instead of calling ``rte_exit``.
>>
>> diff --git a/doc/guides/rel_notes/release_21_11.rst
>> b/doc/guides/rel_notes/release_21_11.rst
>> index d707a554ef..ab76d5dd55 100644
>> --- a/doc/guides/rel_notes/release_21_11.rst
>> +++ b/doc/guides/rel_notes/release_21_11.rst
>> @@ -100,6 +100,8 @@ ABI Changes
>> Also, make sure to start the actual text at the margin.
>> =======================================================
>>
>> +* eventdev: Modified type of ``union rte_event_crypto_metadata`` to
>> +struct and
>> + removed reserved bytes from ``struct rte_event_crypto_request``.
>>
>> Known Issues
>> ------------
>> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> b/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> index eac6796cfb..c51be63146 100644
>> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
>> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws,
>> uintptr_t req,
>> ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]); }
>>
>> -static inline union rte_event_crypto_metadata *
>> +static inline struct rte_event_crypto_metadata *
>> get_event_crypto_mdata(struct rte_crypto_op *op) {
>> - union rte_event_crypto_metadata *ec_mdata;
>> + struct rte_event_crypto_metadata *ec_mdata;
>>
>> if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
>> ec_mdata = rte_cryptodev_sym_session_get_user_data(
>> op->sym->session);
>> else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
>> op->private_data_offset)
>> - ec_mdata = (union rte_event_crypto_metadata *)
>> + ec_mdata = (struct rte_event_crypto_metadata *)
>> ((uint8_t *)op + op->private_data_offset);
>> else
>> return NULL;
>> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op *op)
>> uint16_t __rte_hot otx_crypto_adapter_enqueue(void *port, struct
>> rte_crypto_op *op) {
>> - union rte_event_crypto_metadata *ec_mdata;
>> + struct rte_event_crypto_metadata *ec_mdata;
>> struct cpt_instance *instance;
>> struct cpt_request_info *req;
>> struct rte_event *rsp_info;
>> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> index 42100154cd..952d1352f4 100644
>> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
>> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
>> struct rte_crypto_op *op,
>> uint64_t cpt_inst_w7)
>> {
>> - union rte_event_crypto_metadata *m_data;
>> + struct rte_event_crypto_metadata *m_data;
>> union cpt_inst_s inst;
>> uint64_t lmt_status;
>>
>> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
>> }
>> } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
>> op->private_data_offset) {
>> - m_data = (union rte_event_crypto_metadata *)
>> + m_data = (struct rte_event_crypto_metadata *)
>> ((uint8_t *)op +
>> op->private_data_offset);
>> } else {
>> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> index ecf7eb9f56..458e8306d7 100644
>> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>> @@ -16,7 +16,7 @@
>> static inline uint16_t
>> otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) {
>> - union rte_event_crypto_metadata *m_data;
>> + struct rte_event_crypto_metadata *m_data;
>> struct rte_crypto_op *crypto_op;
>> struct rte_cryptodev *cdev;
>> struct otx2_cpt_qp *qp;
>> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct rte_event
>> *ev)
>> qp_id = m_data->request_info.queue_pair_id;
>> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS
>> &&
>> crypto_op->private_data_offset) {
>> - m_data = (union rte_event_crypto_metadata *)
>> + m_data = (struct rte_event_crypto_metadata *)
>> ((uint8_t *)crypto_op +
>> crypto_op->private_data_offset);
>> cdev_id = m_data->request_info.cdev_id; diff --git
>> a/lib/eventdev/rte_event_crypto_adapter.c
>> b/lib/eventdev/rte_event_crypto_adapter.c
>> index e1d38d383d..6977391ae9 100644
>> --- a/lib/eventdev/rte_event_crypto_adapter.c
>> +++ b/lib/eventdev/rte_event_crypto_adapter.c
>> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct
>> rte_event_crypto_adapter *adapter,
>> struct rte_event *ev, unsigned int cnt) {
>> struct rte_event_crypto_adapter_stats *stats = &adapter-
>> >crypto_stats;
>> - union rte_event_crypto_metadata *m_data = NULL;
>> + struct rte_event_crypto_metadata *m_data = NULL;
>> struct crypto_queue_pair_info *qp_info = NULL;
>> struct rte_crypto_op *crypto_op;
>> unsigned int i, n;
>> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct
>> rte_event_crypto_adapter *adapter,
>> len++;
>> } else if (crypto_op->sess_type ==
>> RTE_CRYPTO_OP_SESSIONLESS &&
>> crypto_op->private_data_offset) {
>> - m_data = (union rte_event_crypto_metadata *)
>> + m_data = (struct rte_event_crypto_metadata *)
>> ((uint8_t *)crypto_op +
>> crypto_op->private_data_offset);
>> cdev_id = m_data->request_info.cdev_id; @@ -
>> 504,7 +504,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter
>> *adapter,
>> struct rte_crypto_op **ops, uint16_t num) {
>> struct rte_event_crypto_adapter_stats *stats = &adapter-
>> >crypto_stats;
>> - union rte_event_crypto_metadata *m_data = NULL;
>> + struct rte_event_crypto_metadata *m_data = NULL;
>> uint8_t event_dev_id = adapter->eventdev_id;
>> uint8_t event_port_id = adapter->event_port_id;
>> struct rte_event events[BATCH_SIZE];
>> @@ -523,7 +523,7 @@ eca_ops_enqueue_burst(struct
>> rte_event_crypto_adapter *adapter,
>> ops[i]->sym->session);
>> } else if (ops[i]->sess_type ==
>> RTE_CRYPTO_OP_SESSIONLESS &&
>> ops[i]->private_data_offset) {
>> - m_data = (union rte_event_crypto_metadata *)
>> + m_data = (struct rte_event_crypto_metadata *)
>> ((uint8_t *)ops[i] +
>> ops[i]->private_data_offset);
>> }
>> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
>> b/lib/eventdev/rte_event_crypto_adapter.h
>> index f8c6cca87c..3c24d9d9df 100644
>> --- a/lib/eventdev/rte_event_crypto_adapter.h
>> +++ b/lib/eventdev/rte_event_crypto_adapter.h
>> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
>> * provide event request information to the adapter.
>> */
>> struct rte_event_crypto_request {
>> - uint8_t resv[8];
>> - /**< Overlaps with first 8 bytes of struct rte_event
>> - * that encode the response event information. Application
>> - * is expected to fill in struct rte_event response_info.
>> - */
>> uint16_t cdev_id;
>> /**< cryptodev ID to be used */
>> uint16_t queue_pair_id;
>> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
>> * operation. If the transfer is done by SW, event response information
>> * will be used by the adapter.
>> */
>> -union rte_event_crypto_metadata {
>> - struct rte_event_crypto_request request_info;
>> - /**< Request information to be filled in by application
>> - * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
>> - */
>> +struct rte_event_crypto_metadata {
>> struct rte_event response_info;
>> /**< Response information to be filled in by application
>> * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
>> * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
>> */
>> + struct rte_event_crypto_request request_info;
>> + /**< Request information to be filled in by application
>> + * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
>> + */
>> };
>>
>> /**
>> --
>> 2.25.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-08-31 7:56 6% ` [dpdk-dev] [PATCH v3] " Shijith Thotton
2021-09-07 8:34 0% ` Jerin Jacob
@ 2021-09-07 8:53 0% ` Gujjar, Abhinandan S
2021-09-07 10:37 0% ` Shijith Thotton
1 sibling, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2021-09-07 8:53 UTC (permalink / raw)
To: Shijith Thotton, dev
Cc: jerinj, anoobj, pbhagavatula, gakhil, Ray Kinsella, Ankur Dwivedi
Hi Shijith,
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Tuesday, August 31, 2021 1:27 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton <sthotton@marvell.com>; jerinj@marvell.com;
> anoobj@marvell.com; pbhagavatula@marvell.com; gakhil@marvell.com;
> Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Ray Kinsella
> <mdr@ashroe.eu>; Ankur Dwivedi <adwivedi@marvell.com>
> Subject: [PATCH v3] eventdev: update crypto adapter metadata structures
>
> In crypto adapter metadata, the reserved bytes in the request info
> structure are a space holder for response info. They enforce an order of
> operations if the structures are updated using memcpy, to avoid
> overwriting response info. It is logical to move the reserved space out
> of request info; doing so also solves the ordering issue mentioned before.
I would like to understand what kind of ordering issue you have faced with
the current approach. Could you please give an example/sequence and explain?
>
> This patch removes the reserve field from request info and makes event
> crypto metadata type to structure from union to make space for response
> info.
>
> App and drivers are updated as per metadata change.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> v3:
> * Updated ABI section of release notes.
>
> v2:
> * Updated deprecation notice.
>
> v1:
> * Rebased.
>
> app/test/test_event_crypto_adapter.c | 14 +++++++-------
> doc/guides/rel_notes/deprecation.rst | 6 ------
> doc/guides/rel_notes/release_21_11.rst | 2 ++
> drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
> .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
> lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
> lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
> 8 files changed, 26 insertions(+), 35 deletions(-)
>
> diff --git a/app/test/test_event_crypto_adapter.c
> b/app/test/test_event_crypto_adapter.c
> index 3ad20921e2..0d73694d3a 100644
> --- a/app/test/test_event_crypto_adapter.c
> +++ b/app/test/test_event_crypto_adapter.c
> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less) {
> struct rte_crypto_sym_xform cipher_xform;
> struct rte_cryptodev_sym_session *sess;
> - union rte_event_crypto_metadata m_data;
> + struct rte_event_crypto_metadata m_data;
> struct rte_crypto_sym_op *sym_op;
> struct rte_crypto_op *op;
> struct rte_mbuf *m;
> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less) {
> struct rte_crypto_sym_xform cipher_xform;
> struct rte_cryptodev_sym_session *sess;
> - union rte_event_crypto_metadata m_data;
> + struct rte_event_crypto_metadata m_data;
> struct rte_crypto_sym_op *sym_op;
> struct rte_crypto_op *op;
> struct rte_mbuf *m;
> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
> if (cap &
> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
> /* Fill in private user data information */
> rte_memcpy(&m_data.response_info,
> &response_info,
> - sizeof(m_data));
> + sizeof(response_info));
> rte_cryptodev_sym_session_set_user_data(sess,
> &m_data, sizeof(m_data));
> }
> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
> op->private_data_offset = len;
> /* Fill in private data information */
> rte_memcpy(&m_data.response_info, &response_info,
> - sizeof(m_data));
> + sizeof(response_info));
> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
> }
>
> @@ -519,7 +519,7 @@ configure_cryptodev(void)
> DEFAULT_NUM_XFORMS *
> sizeof(struct rte_crypto_sym_xform) +
> MAXIMUM_IV_LENGTH +
> - sizeof(union rte_event_crypto_metadata),
> + sizeof(struct rte_event_crypto_metadata),
> rte_socket_id());
> if (params.op_mpool == NULL) {
> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
> @@ -549,12 +549,12 @@ configure_cryptodev(void)
> * to include the session headers & private data
> */
> session_size =
> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> - session_size += sizeof(union rte_event_crypto_metadata);
> + session_size += sizeof(struct rte_event_crypto_metadata);
>
> params.session_mpool = rte_cryptodev_sym_session_pool_create(
> "CRYPTO_ADAPTER_SESSION_MP",
> MAX_NB_SESSIONS, 0, 0,
> - sizeof(union rte_event_crypto_metadata),
> + sizeof(struct rte_event_crypto_metadata),
> SOCKET_ID_ANY);
> TEST_ASSERT_NOT_NULL(params.session_mpool,
> "session mempool allocation failed\n"); diff --git
> a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 76a4abfd6b..58ee95c020 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -266,12 +266,6 @@ Deprecation Notices
> values to the function ``rte_event_eth_rx_adapter_queue_add`` using
> the structure ``rte_event_eth_rx_adapter_queue_add``.
>
> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space
> holder
> - for ``response_info``. Both should be decoupled for better clarity.
> - New space for ``response_info`` can be made by changing
> - ``rte_event_crypto_metadata`` type to structure from union.
> - This change is targeted for DPDK 21.11.
> -
> * metrics: The function ``rte_metrics_init`` will have a non-void return
> in order to notify errors instead of calling ``rte_exit``.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index d707a554ef..ab76d5dd55 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -100,6 +100,8 @@ ABI Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* eventdev: Modified type of ``union rte_event_crypto_metadata`` to
> +struct and
> + removed reserved bytes from ``struct rte_event_crypto_request``.
>
> Known Issues
> ------------
> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> index eac6796cfb..c51be63146 100644
> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws,
> uintptr_t req,
> ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]); }
>
> -static inline union rte_event_crypto_metadata *
> +static inline struct rte_event_crypto_metadata *
> get_event_crypto_mdata(struct rte_crypto_op *op) {
> - union rte_event_crypto_metadata *ec_mdata;
> + struct rte_event_crypto_metadata *ec_mdata;
>
> if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> ec_mdata = rte_cryptodev_sym_session_get_user_data(
> op->sym->session);
> else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> op->private_data_offset)
> - ec_mdata = (union rte_event_crypto_metadata *)
> + ec_mdata = (struct rte_event_crypto_metadata *)
> ((uint8_t *)op + op->private_data_offset);
> else
> return NULL;
> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op *op)
> uint16_t __rte_hot otx_crypto_adapter_enqueue(void *port, struct
> rte_crypto_op *op) {
> - union rte_event_crypto_metadata *ec_mdata;
> + struct rte_event_crypto_metadata *ec_mdata;
> struct cpt_instance *instance;
> struct cpt_request_info *req;
> struct rte_event *rsp_info;
> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> index 42100154cd..952d1352f4 100644
> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
> struct rte_crypto_op *op,
> uint64_t cpt_inst_w7)
> {
> - union rte_event_crypto_metadata *m_data;
> + struct rte_event_crypto_metadata *m_data;
> union cpt_inst_s inst;
> uint64_t lmt_status;
>
> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
> }
> } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> op->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)op +
> op->private_data_offset);
> } else {
> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> index ecf7eb9f56..458e8306d7 100644
> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> @@ -16,7 +16,7 @@
> static inline uint16_t
> otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) {
> - union rte_event_crypto_metadata *m_data;
> + struct rte_event_crypto_metadata *m_data;
> struct rte_crypto_op *crypto_op;
> struct rte_cryptodev *cdev;
> struct otx2_cpt_qp *qp;
> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct rte_event
> *ev)
> qp_id = m_data->request_info.queue_pair_id;
> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS
> &&
> crypto_op->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)crypto_op +
> crypto_op->private_data_offset);
> cdev_id = m_data->request_info.cdev_id; diff --git
> a/lib/eventdev/rte_event_crypto_adapter.c
> b/lib/eventdev/rte_event_crypto_adapter.c
> index e1d38d383d..6977391ae9 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.c
> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct
> rte_event_crypto_adapter *adapter,
> struct rte_event *ev, unsigned int cnt) {
> struct rte_event_crypto_adapter_stats *stats = &adapter-
> >crypto_stats;
> - union rte_event_crypto_metadata *m_data = NULL;
> + struct rte_event_crypto_metadata *m_data = NULL;
> struct crypto_queue_pair_info *qp_info = NULL;
> struct rte_crypto_op *crypto_op;
> unsigned int i, n;
> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct
> rte_event_crypto_adapter *adapter,
> len++;
> } else if (crypto_op->sess_type ==
> RTE_CRYPTO_OP_SESSIONLESS &&
> crypto_op->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)crypto_op +
> crypto_op->private_data_offset);
> cdev_id = m_data->request_info.cdev_id; @@ -
> 504,7 +504,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter
> *adapter,
> struct rte_crypto_op **ops, uint16_t num) {
> struct rte_event_crypto_adapter_stats *stats = &adapter-
> >crypto_stats;
> - union rte_event_crypto_metadata *m_data = NULL;
> + struct rte_event_crypto_metadata *m_data = NULL;
> uint8_t event_dev_id = adapter->eventdev_id;
> uint8_t event_port_id = adapter->event_port_id;
> struct rte_event events[BATCH_SIZE];
> @@ -523,7 +523,7 @@ eca_ops_enqueue_burst(struct
> rte_event_crypto_adapter *adapter,
> ops[i]->sym->session);
> } else if (ops[i]->sess_type ==
> RTE_CRYPTO_OP_SESSIONLESS &&
> ops[i]->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)ops[i] +
> ops[i]->private_data_offset);
> }
> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
> b/lib/eventdev/rte_event_crypto_adapter.h
> index f8c6cca87c..3c24d9d9df 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.h
> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
> * provide event request information to the adapter.
> */
> struct rte_event_crypto_request {
> - uint8_t resv[8];
> - /**< Overlaps with first 8 bytes of struct rte_event
> - * that encode the response event information. Application
> - * is expected to fill in struct rte_event response_info.
> - */
> uint16_t cdev_id;
> /**< cryptodev ID to be used */
> uint16_t queue_pair_id;
> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
> * operation. If the transfer is done by SW, event response information
> * will be used by the adapter.
> */
> -union rte_event_crypto_metadata {
> - struct rte_event_crypto_request request_info;
> - /**< Request information to be filled in by application
> - * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> - */
> +struct rte_event_crypto_metadata {
> struct rte_event response_info;
> /**< Response information to be filled in by application
> * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> */
> + struct rte_event_crypto_request request_info;
> + /**< Request information to be filled in by application
> + * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> + */
> };
>
> /**
> --
> 2.25.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
@ 2021-09-07 8:47 4% ` Ferruh Yigit
2021-09-08 10:29 0% ` [dpdk-dev] [EXT] " Anoob Joseph
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-07 8:47 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, adwivedi, andrew.rybchenko
On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> Reassembly is a costly operation if it is done in
> software; however, if it is offloaded to HW, it can
> considerably save application cycles.
> The operation becomes even costlier if IP fragments
> are encrypted.
>
> To resolve the above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY is introduced in ethdev for
> devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is added with the reassembly capabilities
> which a device can support.
> Now, if IP fragments are encrypted, reassembly can also be
> attempted while doing inline IPsec processing.
> This is controlled by a flag in rte_security_ipsec_sa_options
> to enable reassembly of encrypted IP fragments in the inline
> path.
>
> The resulting reassembled packet would be a typical
> segmented mbuf in case of success.
>
> And if reassembly of fragments fails or is incomplete (if
> fragments do not arrive before the reass_timeout), the mbuf is
> updated with the ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and the
> mbuf is returned as is. The application may then decide the fate
> of the packet: wait longer for fragments to arrive, or drop it.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> lib/mbuf/rte_mbuf_core.h | 3 ++-
> lib/security/rte_security.h | 10 ++++++++++
> 4 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9d95cd11e1..1ab3a093cf 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -119,6 +119,7 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d2b27c351f..e89a4dc1eb 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
The previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; that offload was
removed long ago, but I am not sure whether re-using the bit causes any problem.
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> /**
> * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> */
> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
>
> +/**
> + * Reassembly capabilities that a device can support.
> + * The device which can support reassembly offload should set
> + * DEV_RX_OFFLOAD_REASSEMBLY
> + */
> +struct rte_eth_reass_capa {
> + /** Maximum time in ns that a fragment can wait for further fragments */
> + uint64_t reass_timeout;
> + /** Maximum number of fragments that device can reassemble */
> + uint16_t max_frags;
> + /** Reserved for future capabilities */
> + uint16_t reserved[3];
> +};
> +
I wonder if there is any other hardware around that supports reassembly offload;
it would be good to get more feedback on the capabilities list.
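For context, application-side usage of the proposed capability fields could
look like the following (a sketch against this RFC, not a merged API):

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_REASSEMBLY)
		printf("reassembly: up to %u frags, timeout %" PRIu64 " ns\n",
		       dev_info.reass_capa.max_frags,
		       dev_info.reass_capa.reass_timeout);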
> /**
> * Ethernet device associated switch information
> */
> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /* Reassembly capabilities of a device for reassembly offload */
> + struct rte_eth_reass_capa reass_capa;
>
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
Reserved fields were added to be able to update the struct without breaking the
ABI, so that a critical change doesn't have to wait until the next ABI break release.
Since this is an ABI break release, we can keep the reserved field and add the new
struct. Or this can be an opportunity to get rid of the reserved field.
Personally I have no objection to get rid of the reserved field, but better to
agree on this explicitly.
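For what it's worth, the sizes happen to line up here, so dropping the
reserved field for the new struct would keep the struct size unchanged on
typical LP64 targets (a back-of-envelope check against the RFC, not a
guarantee):

	/* sizeof(struct rte_eth_reass_capa) per the RFC:
	 *   uint64_t reass_timeout -> 8 bytes
	 *   uint16_t max_frags     -> 2 bytes
	 *   uint16_t reserved[3]   -> 6 bytes
	 *   total                  -> 16 bytes, same as uint64_t reserved_64s[2]
	 */
	_Static_assert(sizeof(struct rte_eth_reass_capa) == 2 * sizeof(uint64_t),
		       "reass_capa must exactly fill the freed reserved space");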
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index bb38d7f581..cea25c87f7 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -200,10 +200,11 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>
Similar comment to Andrew's: what is the expectation from the application if this
flag exists? Can we drop it to simplify the logic in the application?
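For illustration, hypothetical application-side handling of the proposed flag
(the RFC does not define the recovery policy; process_packet() is a
placeholder):

	uint16_t i, nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, MAX_BURST);

	for (i = 0; i < nb_rx; i++) {
		if (pkts[i]->ol_flags & PKT_RX_REASSEMBLY_INCOMPLETE) {
			/* HW gave up: either hold the fragment for SW
			 * reassembly (e.g. the rte_ip_frag library) or drop.
			 */
			rte_pktmbuf_free(pkts[i]);
			continue;
		}
		process_packet(pkts[i]); /* reassembled, possibly multi-seg */
	}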
> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_FIRST_FREE (1ULL << 24)
> #define PKT_LAST_FREE (1ULL << 40)
>
> /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 88d31de0a6..364eeb5cd4 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> * * 0: Disable per session security statistics collection for this SA.
> */
> uint32_t stats : 1;
> +
> + /** Enable reassembly on incoming packets.
> + *
> + * * 1: Enable driver to try reassembly of encrypted IP packets for
> + * this SA, if supported by the driver. This feature will work
> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> + * inline ethernet device.
> + * * 0: Disable reassembly of packets (default).
> + */
> + uint32_t reass_en : 1;
> };
>
> /** IPSec security association direction */
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-08-31 7:56 6% ` [dpdk-dev] [PATCH v3] " Shijith Thotton
@ 2021-09-07 8:34 0% ` Jerin Jacob
2021-09-07 8:53 0% ` Gujjar, Abhinandan S
1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-09-07 8:34 UTC (permalink / raw)
To: Shijith Thotton
Cc: dpdk-dev, Jerin Jacob, Anoob Joseph, Pavan Nikhilesh,
Akhil Goyal, Abhinandan Gujjar, Ray Kinsella, Ankur Dwivedi
On Tue, Aug 31, 2021 at 1:27 PM Shijith Thotton <sthotton@marvell.com> wrote:
>
> In crypto adapter metadata, the reserved bytes in the request info
> structure are a space holder for response info. They enforce an order of
> operations if the structures are updated using memcpy, to avoid
> overwriting response info. It is logical to move the reserved space out
> of request info; doing so also solves the ordering issue mentioned before.
>
> This patch removes the reserve field from request info and makes event
> crypto metadata type to structure from union to make space for response
> info.
>
> App and drivers are updated as per metadata change.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
@Gujjar, Abhinandan S @Akhil Goyal
Could you review the crypto adapter-related change?
> ---
> v3:
> * Updated ABI section of release notes.
>
> v2:
> * Updated deprecation notice.
>
> v1:
> * Rebased.
>
> app/test/test_event_crypto_adapter.c | 14 +++++++-------
> doc/guides/rel_notes/deprecation.rst | 6 ------
> doc/guides/rel_notes/release_21_11.rst | 2 ++
> drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
> drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
> .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
> lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
> lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
> 8 files changed, 26 insertions(+), 35 deletions(-)
>
> diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
> index 3ad20921e2..0d73694d3a 100644
> --- a/app/test/test_event_crypto_adapter.c
> +++ b/app/test/test_event_crypto_adapter.c
> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)
> {
> struct rte_crypto_sym_xform cipher_xform;
> struct rte_cryptodev_sym_session *sess;
> - union rte_event_crypto_metadata m_data;
> + struct rte_event_crypto_metadata m_data;
> struct rte_crypto_sym_op *sym_op;
> struct rte_crypto_op *op;
> struct rte_mbuf *m;
> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)
> {
> struct rte_crypto_sym_xform cipher_xform;
> struct rte_cryptodev_sym_session *sess;
> - union rte_event_crypto_metadata m_data;
> + struct rte_event_crypto_metadata m_data;
> struct rte_crypto_sym_op *sym_op;
> struct rte_crypto_op *op;
> struct rte_mbuf *m;
> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
> if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
> /* Fill in private user data information */
> rte_memcpy(&m_data.response_info, &response_info,
> - sizeof(m_data));
> + sizeof(response_info));
> rte_cryptodev_sym_session_set_user_data(sess,
> &m_data, sizeof(m_data));
> }
> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
> op->private_data_offset = len;
> /* Fill in private data information */
> rte_memcpy(&m_data.response_info, &response_info,
> - sizeof(m_data));
> + sizeof(response_info));
> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
> }
>
> @@ -519,7 +519,7 @@ configure_cryptodev(void)
> DEFAULT_NUM_XFORMS *
> sizeof(struct rte_crypto_sym_xform) +
> MAXIMUM_IV_LENGTH +
> - sizeof(union rte_event_crypto_metadata),
> + sizeof(struct rte_event_crypto_metadata),
> rte_socket_id());
> if (params.op_mpool == NULL) {
> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
> @@ -549,12 +549,12 @@ configure_cryptodev(void)
> * to include the session headers & private data
> */
> session_size = rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> - session_size += sizeof(union rte_event_crypto_metadata);
> + session_size += sizeof(struct rte_event_crypto_metadata);
>
> params.session_mpool = rte_cryptodev_sym_session_pool_create(
> "CRYPTO_ADAPTER_SESSION_MP",
> MAX_NB_SESSIONS, 0, 0,
> - sizeof(union rte_event_crypto_metadata),
> + sizeof(struct rte_event_crypto_metadata),
> SOCKET_ID_ANY);
> TEST_ASSERT_NOT_NULL(params.session_mpool,
> "session mempool allocation failed\n");
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 76a4abfd6b..58ee95c020 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -266,12 +266,6 @@ Deprecation Notices
> values to the function ``rte_event_eth_rx_adapter_queue_add`` using
> the structure ``rte_event_eth_rx_adapter_queue_add``.
>
> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space holder
> - for ``response_info``. Both should be decoupled for better clarity.
> - New space for ``response_info`` can be made by changing
> - ``rte_event_crypto_metadata`` type to structure from union.
> - This change is targeted for DPDK 21.11.
> -
> * metrics: The function ``rte_metrics_init`` will have a non-void return
> in order to notify errors instead of calling ``rte_exit``.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index d707a554ef..ab76d5dd55 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -100,6 +100,8 @@ ABI Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* eventdev: Modified type of ``union rte_event_crypto_metadata`` to struct and
> + removed reserved bytes from ``struct rte_event_crypto_request``.
>
> Known Issues
> ------------
> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> index eac6796cfb..c51be63146 100644
> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws, uintptr_t req,
> ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]);
> }
>
> -static inline union rte_event_crypto_metadata *
> +static inline struct rte_event_crypto_metadata *
> get_event_crypto_mdata(struct rte_crypto_op *op)
> {
> - union rte_event_crypto_metadata *ec_mdata;
> + struct rte_event_crypto_metadata *ec_mdata;
>
> if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> ec_mdata = rte_cryptodev_sym_session_get_user_data(
> op->sym->session);
> else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> op->private_data_offset)
> - ec_mdata = (union rte_event_crypto_metadata *)
> + ec_mdata = (struct rte_event_crypto_metadata *)
> ((uint8_t *)op + op->private_data_offset);
> else
> return NULL;
> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op *op)
> uint16_t __rte_hot
> otx_crypto_adapter_enqueue(void *port, struct rte_crypto_op *op)
> {
> - union rte_event_crypto_metadata *ec_mdata;
> + struct rte_event_crypto_metadata *ec_mdata;
> struct cpt_instance *instance;
> struct cpt_request_info *req;
> struct rte_event *rsp_info;
> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> index 42100154cd..952d1352f4 100644
> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
> struct rte_crypto_op *op,
> uint64_t cpt_inst_w7)
> {
> - union rte_event_crypto_metadata *m_data;
> + struct rte_event_crypto_metadata *m_data;
> union cpt_inst_s inst;
> uint64_t lmt_status;
>
> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
> }
> } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> op->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)op +
> op->private_data_offset);
> } else {
> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> index ecf7eb9f56..458e8306d7 100644
> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> @@ -16,7 +16,7 @@
> static inline uint16_t
> otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
> {
> - union rte_event_crypto_metadata *m_data;
> + struct rte_event_crypto_metadata *m_data;
> struct rte_crypto_op *crypto_op;
> struct rte_cryptodev *cdev;
> struct otx2_cpt_qp *qp;
> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
> qp_id = m_data->request_info.queue_pair_id;
> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> crypto_op->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)crypto_op +
> crypto_op->private_data_offset);
> cdev_id = m_data->request_info.cdev_id;
> diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
> index e1d38d383d..6977391ae9 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.c
> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
> struct rte_event *ev, unsigned int cnt)
> {
> struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
> - union rte_event_crypto_metadata *m_data = NULL;
> + struct rte_event_crypto_metadata *m_data = NULL;
> struct crypto_queue_pair_info *qp_info = NULL;
> struct rte_crypto_op *crypto_op;
> unsigned int i, n;
> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
> len++;
> } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> crypto_op->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)crypto_op +
> crypto_op->private_data_offset);
> cdev_id = m_data->request_info.cdev_id;
> @@ -504,7 +504,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
> struct rte_crypto_op **ops, uint16_t num)
> {
> struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
> - union rte_event_crypto_metadata *m_data = NULL;
> + struct rte_event_crypto_metadata *m_data = NULL;
> uint8_t event_dev_id = adapter->eventdev_id;
> uint8_t event_port_id = adapter->event_port_id;
> struct rte_event events[BATCH_SIZE];
> @@ -523,7 +523,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
> ops[i]->sym->session);
> } else if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> ops[i]->private_data_offset) {
> - m_data = (union rte_event_crypto_metadata *)
> + m_data = (struct rte_event_crypto_metadata *)
> ((uint8_t *)ops[i] +
> ops[i]->private_data_offset);
> }
> diff --git a/lib/eventdev/rte_event_crypto_adapter.h b/lib/eventdev/rte_event_crypto_adapter.h
> index f8c6cca87c..3c24d9d9df 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.h
> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
> * provide event request information to the adapter.
> */
> struct rte_event_crypto_request {
> - uint8_t resv[8];
> - /**< Overlaps with first 8 bytes of struct rte_event
> - * that encode the response event information. Application
> - * is expected to fill in struct rte_event response_info.
> - */
> uint16_t cdev_id;
> /**< cryptodev ID to be used */
> uint16_t queue_pair_id;
> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
> * operation. If the transfer is done by SW, event response information
> * will be used by the adapter.
> */
> -union rte_event_crypto_metadata {
> - struct rte_event_crypto_request request_info;
> - /**< Request information to be filled in by application
> - * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> - */
> +struct rte_event_crypto_metadata {
> struct rte_event response_info;
> /**< Response information to be filled in by application
> * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> */
> + struct rte_event_crypto_request request_info;
> + /**< Request information to be filled in by application
> + * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> + */
> };
>
> /**
> --
> 2.25.1
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [RFC PATCH v5 0/5] Add PIE support for HQoS library
@ 2021-09-07 7:33 3% ` Liguzinski, WojciechX
2021-09-07 14:11 3% ` [dpdk-dev] [RFC PATCH v6 0/5] Add PIE support for HQoS library Liguzinski, WojciechX
0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-09-07 7:33 UTC (permalink / raw)
To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera
The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation in which excess buffering in the network causes
high latency and latency variation. Currently, it supports RED for active queue
management, which is designed to control the queue length; RED does not control
latency directly and is now being obsoleted. However, more advanced queue
management is required to address this problem and provide the desired quality
of service to users.
This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional
Integral controller Enhanced) that can effectively and directly control queuing
latency to address the bufferbloat problem.
The implementation of this functionality includes modifying existing data
structures, adding a new set of data structures to the library, and adding
PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice
is going to be prepared and sent.
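To sketch the idea (simplified from RFC 8033; the series' lib/sched code uses
fixed-point arithmetic and different naming), PIE periodically updates a drop
probability from the latency error and the latency trend:

	static void
	pie_update_drop_prob(double cur_latency, double *old_latency,
			     double target_latency, double alpha, double beta,
			     double *drop_prob)
	{
		/* the proportional term reacts to the latency error, the
		 * second term to the latency trend; together they steer
		 * queuing latency toward the target
		 */
		*drop_prob += alpha * (cur_latency - target_latency) +
			      beta * (cur_latency - *old_latency);
		if (*drop_prob < 0.0)
			*drop_prob = 0.0;
		else if (*drop_prob > 1.0)
			*drop_prob = 1.0;
		*old_latency = cur_latency;
	}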
Liguzinski, WojciechX (5):
sched: add PIE based congestion management
example/qos_sched: add PIE support
example/ip_pipeline: add PIE support
doc/guides/prog_guide: added PIE
app/test: add tests for PIE
app/test/autotest_data.py | 18 +
app/test/meson.build | 4 +
app/test/test_pie.c | 1076 ++++++++++++++++++
config/rte_config.h | 1 -
doc/guides/prog_guide/glossary.rst | 3 +
doc/guides/prog_guide/qos_framework.rst | 60 +-
doc/guides/prog_guide/traffic_management.rst | 13 +-
drivers/net/softnic/rte_eth_softnic_tm.c | 6 +-
examples/ip_pipeline/tmgr.c | 6 +-
examples/qos_sched/app_thread.c | 1 -
examples/qos_sched/cfg_file.c | 82 +-
examples/qos_sched/init.c | 7 +-
examples/qos_sched/profile.cfg | 196 ++--
lib/sched/meson.build | 10 +-
lib/sched/rte_pie.c | 86 ++
lib/sched/rte_pie.h | 398 +++++++
lib/sched/rte_sched.c | 228 ++--
lib/sched/rte_sched.h | 53 +-
lib/sched/version.map | 3 +
19 files changed, 2061 insertions(+), 190 deletions(-)
create mode 100644 app/test/test_pie.c
create mode 100644 lib/sched/rte_pie.c
create mode 100644 lib/sched/rte_pie.h
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4 4/4] doc: remove deprecation notice for security fast path change
@ 2021-09-06 18:57 3% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-06 18:57 UTC (permalink / raw)
To: Nithin Kumar Dabilpuram, dev
Cc: Jerin Jacob Kollanukkaran, hemant.agrawal, thomas, g.singh,
ferruh.yigit, roy.fan.zhang, olivier.matz, declan.doherty,
radu.nicolau, jiawenwu, konstantin.ananyev,
Nithin Kumar Dabilpuram
> Remove deprecation notice for security fast path change.
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ----
> 1 file changed, 4 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 76a4abf..2e38a36 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -279,10 +279,6 @@ Deprecation Notices
> content. On Linux and FreeBSD, supported prior to DPDK 20.11,
> original structure will be kept until DPDK 21.11.
>
> -* security: The functions ``rte_security_set_pkt_metadata`` and
> - ``rte_security_get_userdata`` will be made inline functions and additional
> - flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> -
A corresponding update to the release notes is also needed when a deprecation
notice is removed.
You can update the ABI/API changes section of the release notes.
Also, squash this patch into the 2/4 patch; doc changes should be part of the patch.
Regards
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
2021-08-26 12:37 3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
@ 2021-09-06 18:09 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-09-06 18:09 UTC (permalink / raw)
To: Jerin Jacob, Konstantin Ananyev
Cc: dpdk-dev, Thomas Monjalon, Andrew Rybchenko, Qiming Yang,
Qi Zhang, Beilei Xing, techboard
On 8/26/2021 1:37 PM, Jerin Jacob wrote:
> On Fri, Aug 20, 2021 at 9:59 PM Konstantin Ananyev
> <konstantin.ananyev@intel.com> wrote:
>>
>> NOTE: This is just an RFC to start further discussion and collect the feedback.
>> Due to significant amount of work, changes required are applied only to two
>> PMDs so far: net/i40e and net/ice.
>> So to build it you'll need to add:
>> -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
>> to your config options.
>
>>
>> That approach was selected to avoid(/minimize) possible performance losses.
>>
>> So far I done only limited amount functional and performance testing.
>> Didn't spot any functional problems, and performance numbers
>> remains the same before and after the patch on my box (testpmd, macswap fwd).
>
>
> Based on testing on octeontx2, we see some regression in testpmd and
> a bit in l3fwd too.
>
> Without patch: 73.5 mpps/core in testpmd iofwd
> With patch: 72.5 mpps/core in testpmd iofwd
>
So roughly 1.3% drop.
> Based on my understanding it is due to additional indirection.
>
What is the additional indirection? I can see all dereferences are the same.
> My suggestion is to fix the problem by
> removing the additional `data` indirection, pulling the callback function
> pointers back,
> and keeping the rest as opaque as done in the existing patch, like [1].
>
> I don't believe this has any real implication for future ABI stability,
> as we will not be adding
> any new item to rte_eth_fp in any way; new features can be added in the
> slowpath rte_eth_dev as mentioned in the patch.
>
> [2] is the patch of doing the same as I don't see any performance
> regression after [2].
>
The only difference is an additional check after Konstantin's patch (in both
'rte_eth_rx_burst()' & 'rte_eth_tx_burst()'):
"
if (port_id >= RTE_MAX_ETHPORTS)
return -EINVAL;
"
That check is removed again in your patch [2], which fixes the regression. I
wonder if this check is the reason for the performance drop; can you please
test the original patch with just the 'RTE_MAX_ETHPORTS' check removed?
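For reference, the shape of the inlined fast path under discussion, with
illustrative names (not the actual RFC code):

	static inline uint16_t
	rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
			 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
	{
		if (port_id >= RTE_MAX_ETHPORTS) /* the check under test */
			return 0;

		struct rte_eth_fp *fp = &rte_eth_fp_array[port_id];
		void *rxq = fp->rxq_data[queue_id]; /* illustrative field */

		return fp->rx_pkt_burst(rxq, rx_pkts, nb_pkts);
	}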
Thanks,
ferruh
>
> [1]
> - struct rte_eth_burst_api {
> - struct rte_eth_fp {
> + void *data;
> rte_eth_rx_burst_t rx_pkt_burst;
> /**< PMD receive function. */
> rte_eth_tx_burst_t tx_pkt_burst;
> @@ -85,8 +100,19 @@ struct rte_eth_burst_api {
> /**< Check the status of a Rx descriptor. */
> rte_eth_tx_descriptor_status_t tx_descriptor_status;
> /**< Check the status of a Tx descriptor. */
> + /**
> + * User-supplied functions called from rx_burst to post-process
> + * received packets before passing them to the user
> + */
> + struct rte_eth_rxtx_callback
> + *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> + /**
> + * User-supplied functions called from tx_burst to pre-process
> + * received packets before passing them to the driver for transmission.
> + */
> + struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> uintptr_t reserved[2];
> -} __rte_cache_min_aligned;
> +} __rte_cache_aligned;
>
> [2]
> https://pastebin.com/CuqkrCW4
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver
@ 2021-09-06 17:17 3% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-09-06 17:17 UTC (permalink / raw)
To: Selwin Sebastian; +Cc: dev, Thomas Monjalon
On Mon, Sep 6, 2021 at 6:56 PM Selwin Sebastian
<selwin.sebastian@amd.com> wrote:
>
> Add support for PTDMA driver
- This description is rather short.
Can this new driver be implemented as a dmadev?
See (current revision):
https://patchwork.dpdk.org/project/dpdk/list/?series=18677&state=%2A&archive=both
- In any case, quick comments on this patch:
Please update release notes.
vfio-pci should be preferred over igb_uio.
Please check indent in meson.
ABI version is incorrect in version.map.
RTE_LOG_REGISTER_DEFAULT should be preferred.
The patch is monolithic; could it be split per functionality to ease review?
Copy relevant maintainers and/or (sub-)tree maintainers to make them
aware of this work, and get those patches reviewed.
Please submit new revisions of patchsets with increased revision
number in title + changelog that helps track what changed between
revisions.
Some of those points are described in:
https://doc.dpdk.org/guides/contributing/patches.html
Thanks.
--
David Marchand
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 1/3] security: support user specified IV
@ 2021-09-06 14:58 4% ` Anoob Joseph
2021-09-07 16:17 3% ` [dpdk-dev] [PATCH v3 0/3] Add user specified IV with lookaside IPsec Anoob Joseph
1 sibling, 0 replies; 200+ results
From: Anoob Joseph @ 2021-09-06 14:58 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
Cc: Anoob Joseph, Jerin Jacob, Archana Muniganti, Tejasree Kondoj,
Hemant Agrawal, Radu Nicolau, Ciara Power, Gagandeep Singh, dev
Enable the user to provide the IV to be used per security operation. This
would be used with lookaside protocol offload for comparing
against known vectors.
By default, the PMD would generate the IV internally, and it would be random.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 14 ++++++++++++++
2 files changed, 19 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 83da727..a1813bd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -105,6 +105,11 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* security: add IPsec SA option to disable IV generation
+
+ * Added IPsec SA option to disable IV generation to allow known vector
+ tests as well as usage of application provided IV on supported PMDs.
+
ABI Changes
-----------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de..b4b6776 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,20 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Disable IV generation in PMD
+ *
+ * * 1: Disable IV generation in PMD. When disabled, IV provided in
+ * rte_crypto_op will be used by the PMD.
+ *
+ * * 0: Enable IV generation in PMD. When enabled, PMD generated random
+ * value would be used and application is not required to provide
+ * IV.
+ *
+ * Note: For inline cases, IV generation would always need to be handled
+ * by the PMD.
+ */
+ uint32_t iv_gen_disable : 1;
};
/** IPSec security association direction */
--
2.7.4
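A hypothetical usage sketch for the option added above (IV_OFFSET, iv_len and
known_vector_iv follow the usual test-app conventions and are assumptions,
not part of this patch):

	struct rte_security_ipsec_xform ipsec_xform = { 0 };

	/* ask the PMD to consume the application-provided IV */
	ipsec_xform.options.iv_gen_disable = 1;

	/* ... create the security session using ipsec_xform ... */

	/* per operation: place the known-vector IV at the agreed offset */
	uint8_t *iv = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
	memcpy(iv, known_vector_iv, iv_len);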
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-06 8:51 0% ` Ding, Xuan
@ 2021-09-06 13:43 0% ` Ferruh Yigit
2021-09-07 15:21 0% ` Burakov, Anatoly
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-06 13:43 UTC (permalink / raw)
To: Ding, Xuan, Kinsella, Ray, Burakov, Anatoly, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon
On 9/6/2021 9:51 AM, Ding, Xuan wrote:
> Hi,
>
>> -----Original Message-----
>> From: Kinsella, Ray <mdr@ashroe.eu>
>> Sent: Friday, September 3, 2021 12:13 AM
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Burakov, Anatoly
>> <anatoly.burakov@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
>> dev@dpdk.org
>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>> Thomas Monjalon <thomas@monjalon.net>
>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>
>>
>>
>> On 02/09/2021 10:50, Ferruh Yigit wrote:
>>> On 9/1/2021 2:25 PM, Burakov, Anatoly wrote:
>>>> On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
>>>>> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
>>>>>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>>>>>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>>>>>>> Hi Ferruh,
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>>>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov,
>> Anatoly
>>>>>>>>> <anatoly.burakov@intel.com>
>>>>>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo
>> <chenbo.xia@intel.com>; Hu,
>>>>>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce
>> <bruce.richardson@intel.com>
>>>>>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>>>>>>
>>>>>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for
>> the
>>>>>>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>>>>>>> side effect of compacting two separate mappings that just happen to
>> be
>>>>>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also
>> does
>>>>>>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>>>>>>> DPDK
>>>>>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>>>>>>> maps even though it could have been unmapped [1].
>>>>>>>>>>
>>>>>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>>>>>>> also include page size, and always map memory page-by-page.
>>>>>>>>>>
>>>>>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>>>>>>> ---
>>>>>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>>>>>> 1 file changed, 3 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> index 76a4abfd6b..1234420caf 100644
>>>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings
>> and
>>>>>>>>> other
>>>>>>>>>> information from the crypto/security operation. This field will be
>>>>>>>>>> used to
>>>>>>>>>> communicate events such as soft expiry with IPsec in lookaside
>> mode.
>>>>>>>>>> +
>>>>>>>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended
>> to
>>>>>>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Is this means adding a new parameter to API?
>>>>>>>>> If so this is an ABI/API break and we can't do this change in the 22.02.
>>>>>>>>
>>>>>>>> Our original plan is add a new parameter in order not to use a new
>> function
>>>>>>>> name, so you mean, any changes to the API can only be done in the LTS
>> version?
>>>>>>>> If so, we can only add a new API and retire the old one in 22.11.
>>>>>>>>
>>>>>>>
>>>>>>> We can add a new API anytime. Adding new parameter to an existing API
>> can be
>>>>>>> done on the ABI break release.
>>>>>>
>>>>>> So, 22.11 then?
>>>>>>
>>>>>
>>>>> Yes.
>>>>>
>>>>>>>
>>>>>>> You can add the new API in this release, and start using it.
>>>>>>> And mark the old API as deprecated in this release. This lets existing
>> binaries
>>>>>>> to keep using it, but app needs to switch to new API for compilation.
>>>>>>> Old API can be removed on 22.11, and you will need a deprecation notice
>> before
>>>>>>> 22.11 for it.
>>>>>>>
>>>>>>> Is above plan works for you?
>>>>>>>
>>>>>>
>>>>>> We have slightly rethought our approach, and the functionality that Xuan
>>>>>> requires does not rely on this API. They can, for all intents and purposes, be
>>>>>> considered unrelated issues.
>>>>>>
>>>>>> I still think it's a good idea to update the API that way, so I would like to do
>>>>>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
>>>>>> there no longer is any urgency here, it's acceptable to wait for the next LTS
>> to
>>>>>> break it.
>>>>>>
>>>>>
>>>>> Got it.
>>>>>
>>>>> As far as I understand, main motivation in techboard decision was to
>> prevent the
>>>>> ABI break as much as possible (main reason of decision wasn't deprecation
>> notice
>>>>> being late). But if the correct thing to do is to rename the API (and break the
>>>>> ABI), I don't see the benefit to wait one more year, it is just delaying the
>>>>> impact and adding overhead to us.
>>>>> I am for being pragmatic and doing the change in this release if the API rename
>> is
>>>>> the better option; perhaps we can visit the issue again in the techboard.
>>>>>
>>>>> Can you please describe why renaming the API is a better option, compared to
>> adding
>>>>> a new API with a new parameter?
>>>>
>>>> I take it you meant "why renaming the API *isn't* a better option".
>>>>
>>>> The problem we're solving is that the API in question does not know about
>> page
>>>> sizes and thus can't map segments page-by-page. I mean I /guess/ we could
>> have
>>>> two APIs (one paged, one not paged), but then we get into all kinds of hairy
>>>> things about the API leaking the details of the underlying platform.
>>>>
>>>> Bottom line: I like the current API function name. It's concise, it's descriptive.
>>>> It's only missing a parameter, which I would like to add. A rename has been
>>>> suggested (deprecate old API, add new API with a different name, and with
>> added
>>>> parameter), but honestly, I don't see why we have to do that because this is
>>>> predicated upon the assumption that we *can't* break ABI at all, under any
>>>> circumstances.
>>>>
>>>> Can you please explain to me what is wrong with keeping a versioned symbol?
>>>> Like, keep the old function around to keep ABI compatibility, but break the
>> API
>>>> compatibility for those who target 22.02 or later? That's what symbol
>> versioning
>>>> is *for*, is it not?
>>>>
>>>
>>> Nothing wrong with symbol versioning; indeed that is the preferred way if it works
>>> for you. I didn't get that symbol versioning was planned.
>>>
>>> @Ray,
>>> Since symbol versioning is planned, ABI won't break, but API will change, can
>>> this change be done in this release without deprecation notice?
>>
>> Yes - I would think so.
>> Since we are going to the effort of using symbol versioning, nothing is being
>> deprecated as such (yet).
>>
>>> Later we can have a deprecation notice to remove old symbol on 22.11.
>
> Thanks for your explanation.
> @Yigit, Ferruh Does it mean that we can do the API change in 21.11? If so, we will
> follow the process and target API change in this release. :)
>
With symbol versioning, yes you can make the change in this release.
You can send another deprecation notice to remove the old symbol for 22.11; that
can be sent anytime until 22.08 is released.
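[A minimal sketch of the symbol-versioning approach agreed on above, using
DPDK's rte_function_versioning.h macros. The _v21/_v22 suffixes and the added
page-size parameter are assumptions for illustration only, not the actual patch:

#include <stdint.h>
#include <rte_function_versioning.h>

/* New interface: callers compiled against 22.02+ bind to this one.
 * The pagesz parameter is an assumed illustration. */
int
rte_vfio_container_dma_map_v22(int container_fd, uint64_t vaddr,
			       uint64_t iova, uint64_t len, uint64_t pagesz)
{
	/* Map memory page-by-page using pagesz (body omitted). */
	(void)container_fd; (void)vaddr; (void)iova; (void)len; (void)pagesz;
	return 0;
}
BIND_DEFAULT_SYMBOL(rte_vfio_container_dma_map, _v22, 22);

/* Old binary interface, kept so existing binaries keep working. */
int
rte_vfio_container_dma_map_v21(int container_fd, uint64_t vaddr,
			       uint64_t iova, uint64_t len)
{
	/* Fall back to a default page size (0 = let the library pick). */
	return rte_vfio_container_dma_map_v22(container_fd, vaddr, iova,
					      len, 0);
}
VERSION_SYMBOL(rte_vfio_container_dma_map, _v21, 21);

Old binaries keep resolving rte_vfio_container_dma_map@DPDK_21, while newly
compiled applications bind to the amended default symbol.]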
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v20 0/7] support dmadev
2021-09-04 10:10 3% ` [dpdk-dev] [PATCH v20 0/7] support dmadev Chengwen Feng
@ 2021-09-06 13:37 0% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-09-06 13:37 UTC (permalink / raw)
To: Chengwen Feng
Cc: thomas, ferruh.yigit, jerinj, jerinjacobk, andrew.rybchenko, dev,
mb, nipun.gupta, hemant.agrawal, maxime.coquelin,
honnappa.nagarahalli, david.marchand, sburla, pkapoor,
konstantin.ananyev, conor.walsh, kevin.laatz
On Sat, Sep 04, 2021 at 06:10:20PM +0800, Chengwen Feng wrote:
> This patch set contains seven patches for the newly added dmadev.
>
> Chengwen Feng (7):
> dmadev: introduce DMA device library public APIs
> dmadev: introduce DMA device library internal header
> dmadev: introduce DMA device library PMD header
> dmadev: introduce DMA device library implementation
> doc: add DMA device library guide
> dma/skeleton: introduce skeleton dmadev driver
> app/test: add dmadev API test
>
> ---
> v20:
> * delete unnecessary and duplicate include header files.
> * the conf_sz parameter is added to the configure and vchan-setup
> callbacks of the PMD; this is mainly used to enhance ABI
> compatibility.
> * the rte_dmadev structure field is rearranged to reserve more space
> for I/O functions.
> * fix some ambiguous and unnecessary comments.
> * fix the potential memory leak of ut.
> * redefine skeldma_init_once to skeldma_count.
> * suppress rte_dmadev error output when execute ut.
Thanks for V20, the changes I've checked look good.
/Bruce
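[For context on the conf_sz item in the changelog above: the idea is that the
dmadev library passes sizeof() of the structure it was built against, so a
driver can detect a layout mismatch instead of silently reading past the end
of a smaller structure. A minimal sketch, assuming the callback shape
described in the changelog; exact prototypes may differ in the series:

#include <errno.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_dmadev.h>

static int
skeldma_configure(struct rte_dmadev *dev,
		  const struct rte_dmadev_conf *conf, uint32_t conf_sz)
{
	RTE_SET_USED(dev);

	/* A caller built against a different rte_dmadev_conf layout
	 * passes a different size; refuse instead of misreading it. */
	if (conf == NULL || conf_sz != sizeof(*conf))
		return -EINVAL;

	/* ... apply the configuration (omitted) ... */
	return 0;
}]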
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-02 16:13 0% ` Kinsella, Ray
@ 2021-09-06 8:51 0% ` Ding, Xuan
2021-09-06 13:43 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ding, Xuan @ 2021-09-06 8:51 UTC (permalink / raw)
To: Kinsella, Ray, Yigit, Ferruh, Burakov, Anatoly, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon
Hi,
> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Friday, September 3, 2021 12:13 AM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Burakov, Anatoly
> <anatoly.burakov@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
> dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>
>
>
> On 02/09/2021 10:50, Ferruh Yigit wrote:
> > On 9/1/2021 2:25 PM, Burakov, Anatoly wrote:
> >> On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
> >>> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
> >>>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
> >>>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
> >>>>>> Hi Ferruh,
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> >>>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
> >>>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov,
> Anatoly
> >>>>>>> <anatoly.burakov@intel.com>
> >>>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo
> <chenbo.xia@intel.com>; Hu,
> >>>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> >>>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
> >>>>>>>
> >>>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
> >>>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for
> the
> >>>>>>>> purposes of saving space in the internal list of mappings. This has a
> >>>>>>>> side effect of compacting two separate mappings that just happen to
> be
> >>>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also
> does
> >>>>>>>> not allow partial unmapping of memory mapped for DMA, the current
> >>>>>>> DPDK
> >>>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
> >>>>>>>> maps even though it could have been unmapped [1].
> >>>>>>>>
> >>>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
> >>>>>>>> also include page size, and always map memory page-by-page.
> >>>>>>>>
> >>>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
> >>>>>>>>
> >>>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> >>>>>>>> ---
> >>>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
> >>>>>>>> 1 file changed, 3 insertions(+)
> >>>>>>>>
> >>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>>>>> b/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> index 76a4abfd6b..1234420caf 100644
> >>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
> >>>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings
> and
> >>>>>>> other
> >>>>>>>> information from the crypto/security operation. This field will be
> >>>>>>>> used to
> >>>>>>>> communicate events such as soft expiry with IPsec in lookaside
> mode.
> >>>>>>>> +
> >>>>>>>> +* vfio: the function `rte_vfio_container_dma_map` will be amended
> to
> >>>>>>>> + include page size. This change is targeted for DPDK 22.02.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Does this mean adding a new parameter to the API?
> >>>>>>> If so, this is an ABI/API break and we can't do this change in 22.02.
> >>>>>>
> >>>>>> Our original plan was to add a new parameter in order not to use a new
> function
> >>>>>> name. So you mean any changes to the API can only be done in the LTS
> version?
> >>>>>> If so, we can only add a new API and retire the old one in 22.11.
> >>>>>>
> >>>>>
> >>>>> We can add a new API anytime. Adding a new parameter to an existing API
> can be
> >>>>> done on the ABI break release.
> >>>>
> >>>> So, 22.11 then?
> >>>>
> >>>
> >>> Yes.
> >>>
> >>>>>
> >>>>> You can add the new API in this release, and start using it.
> >>>>> And mark the old API as deprecated in this release. This lets existing
> binaries
> >>>>> keep using it, but apps need to switch to the new API for compilation.
> >>>>> Old API can be removed on 22.11, and you will need a deprecation notice
> before
> >>>>> 22.11 for it.
> >>>>>
> >>>>> Does the above plan work for you?
> >>>>>
> >>>>
> >>>> We have slightly rethought our approach, and the functionality that Xuan
> >>>> requires does not rely on this API. They can, for all intents and purposes, be
> >>>> considered unrelated issues.
> >>>>
> >>>> I still think it's a good idea to update the API that way, so I would like to do
> >>>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
> >>>> there no longer is any urgency here, it's acceptable to wait for the next LTS
> to
> >>>> break it.
> >>>>
> >>>
> >>> Got it.
> >>>
> >>> As far as I understand, the main motivation in the techboard decision was to
> prevent the
> >>> ABI break as much as possible (the main reason for the decision wasn't the deprecation
> notice
> >>> being late). But if the correct thing to do is to rename the API (and break the
> >>> ABI), I don't see the benefit of waiting one more year; it is just delaying the
> >>> impact and adding overhead to us.
> >>> I am for being pragmatic and doing the change in this release if the API rename
> is
> >>> the better option; perhaps we can visit the issue again in the techboard.
> >>>
> >>> Can you please describe why renaming the API is a better option, compared to
> adding
> >>> a new API with a new parameter?
> >>
> >> I take it you meant "why renaming the API *isn't* a better option".
> >>
> >> The problem we're solving is that the API in question does not know about
> page
> >> sizes and thus can't map segments page-by-page. I mean I /guess/ we could
> have
> >> two APIs (one paged, one not paged), but then we get into all kinds of hairy
> >> things about the API leaking the details of the underlying platform.
> >>
> >> Bottom line: I like the current API function name. It's concise, it's descriptive.
> >> It's only missing a parameter, which I would like to add. A rename has been
> >> suggested (deprecate old API, add new API with a different name, and with
> added
> >> parameter), but honestly, I don't see why we have to do that because this is
> >> predicated upon the assumption that we *can't* break ABI at all, under any
> >> circumstances.
> >>
> >> Can you please explain to me what is wrong with keeping a versioned symbol?
> >> Like, keep the old function around to keep ABI compatibility, but break the
> API
> >> compatibility for those who target 22.02 or later? That's what symbol
> versioning
> >> is *for*, is it not?
> >>
> >
> > Nothing wrong with symbol versioning; indeed that is the preferred way if it works
> > for you. I didn't get that symbol versioning was planned.
> >
> > @Ray,
> > Since symbol versioning is planned, ABI won't break, but API will change, can
> > this change be done in this release without deprecation notice?
>
> Yes - I would think so.
> Since we are going to the effort of using symbol versioning, nothing is being
> deprecated as such (yet).
>
> > Later we can have a deprecation notice to remove old symbol on 22.11.
Thanks for your explanation.
@Yigit, Ferruh Does it mean that we can do the API change in 21.11? If so, we will
follow the process and target API change in this release. :)
Regards,
Xuan
> >
> > Thanks,
> > ferruh
> >
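[To make the issue in the quoted deprecation notice concrete, a minimal
sketch of the "accidentally adjacent maps" problem under the current API;
the container fd, addresses and sizes are invented for the example:

#include <stdint.h>
#include <rte_vfio.h>

static void
adjacent_map_example(int cfd)
{
	const uint64_t base = 0x100000000ULL;
	const uint64_t sz = 2 * 1024 * 1024;	/* one 2M hugepage */

	/* Two independent maps that happen to be virtually contiguous
	 * are compacted into a single entry internally... */
	rte_vfio_container_dma_map(cfd, base, base, sz);
	rte_vfio_container_dma_map(cfd, base + sz, base + sz, sz);

	/* ...so unmapping only the first region fails on IA, because
	 * VFIO type1 does not allow partial unmap of a mapped range. */
	rte_vfio_container_dma_unmap(cfd, base, base, sz);
}]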
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ethdev: promote burst mode API to stable
2021-09-06 5:56 3% ` [dpdk-dev] [PATCH v2] " Haiyue Wang
@ 2021-09-06 6:36 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-09-06 6:36 UTC (permalink / raw)
To: Haiyue Wang, dev; +Cc: Ferruh Yigit, Ray Kinsella, Thomas Monjalon
On 9/6/21 8:56 AM, Haiyue Wang wrote:
> The DPDK Symbol Bot reports:
> Please note the symbols listed below have expired. In line with the
> DPDK ABI policy, they should be scheduled for removal in the next
> DPDK release.
>
> Symbol
> rte_eth_rx_burst_mode_get
> rte_eth_tx_burst_mode_get
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2] ethdev: promote burst mode API to stable
2021-09-01 5:07 3% [dpdk-dev] [PATCH v1 0/3] Promote some API to stable Haiyue Wang
` (2 preceding siblings ...)
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode " Haiyue Wang
@ 2021-09-06 5:56 3% ` Haiyue Wang
2021-09-06 6:36 0% ` Andrew Rybchenko
3 siblings, 1 reply; 200+ results
From: Haiyue Wang @ 2021-09-06 5:56 UTC (permalink / raw)
To: dev
Cc: Haiyue Wang, Ferruh Yigit, Ray Kinsella, Thomas Monjalon,
Andrew Rybchenko
The DPDK Symbol Bot reports:
Please note the symbols listed below have expired. In line with the
DPDK ABI policy, they should be scheduled for removal in the next
DPDK release.
Symbol
rte_eth_rx_burst_mode_get
rte_eth_tx_burst_mode_get
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
v2: Drop the PMD API promote to stable
---
lib/ethdev/rte_ethdev.h | 2 --
lib/ethdev/version.map | 4 ++--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351f..3277e8f8fb 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4361,7 +4361,6 @@ int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
* - -ENOTSUP: routine is not supported by the device PMD.
* - -EINVAL: The queue_id is out of range.
*/
-__rte_experimental
int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
@@ -4383,7 +4382,6 @@ int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
* - -ENOTSUP: routine is not supported by the device PMD.
* - -EINVAL: The queue_id is out of range.
*/
-__rte_experimental
int rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 3eece75b72..6a12b0664a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -87,6 +87,7 @@ DPDK_22 {
rte_eth_promiscuous_get;
rte_eth_remove_rx_callback;
rte_eth_remove_tx_callback;
+ rte_eth_rx_burst_mode_get;
rte_eth_rx_queue_info_get;
rte_eth_rx_queue_setup;
rte_eth_set_queue_rate_limit;
@@ -104,6 +105,7 @@ DPDK_22 {
rte_eth_tx_buffer_drop_callback;
rte_eth_tx_buffer_init;
rte_eth_tx_buffer_set_err_callback;
+ rte_eth_tx_burst_mode_get;
rte_eth_tx_done_cleanup;
rte_eth_tx_queue_info_get;
rte_eth_tx_queue_setup;
@@ -166,9 +168,7 @@ EXPERIMENTAL {
# added in 19.11
rte_eth_dev_hairpin_capability_get;
- rte_eth_rx_burst_mode_get;
rte_eth_rx_hairpin_queue_setup;
- rte_eth_tx_burst_mode_get;
rte_eth_tx_hairpin_queue_setup;
rte_flow_dynf_metadata_offs;
rte_flow_dynf_metadata_mask;
--
2.33.0
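[For context, a minimal usage sketch of the two symbols being promoted to
stable by this patch; error handling is abbreviated and the port/queue
values are illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_burst_modes(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_burst_mode mode;

	/* Report how the PMD implements the Rx/Tx burst functions
	 * (e.g. scalar vs. vector paths). */
	if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
		printf("Rx burst mode: %s\n", mode.info);
	if (rte_eth_tx_burst_mode_get(port_id, queue_id, &mode) == 0)
		printf("Tx burst mode: %s\n", mode.info);
}]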
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v20 0/7] support dmadev
@ 2021-09-04 10:10 3% ` Chengwen Feng
2021-09-06 13:37 0% ` Bruce Richardson
2021-09-07 12:56 3% ` [dpdk-dev] [PATCH v21 " Chengwen Feng
1 sibling, 1 reply; 200+ results
From: Chengwen Feng @ 2021-09-04 10:10 UTC (permalink / raw)
To: thomas, ferruh.yigit, bruce.richardson, jerinj, jerinjacobk,
andrew.rybchenko
Cc: dev, mb, nipun.gupta, hemant.agrawal, maxime.coquelin,
honnappa.nagarahalli, david.marchand, sburla, pkapoor,
konstantin.ananyev, conor.walsh, kevin.laatz
This patch set contains seven patches for the newly added dmadev.
Chengwen Feng (7):
dmadev: introduce DMA device library public APIs
dmadev: introduce DMA device library internal header
dmadev: introduce DMA device library PMD header
dmadev: introduce DMA device library implementation
doc: add DMA device library guide
dma/skeleton: introduce skeleton dmadev driver
app/test: add dmadev API test
---
v20:
* delete unnecessary and duplicate include header files.
* the conf_sz parameter is added to the configure and vchan-setup
callbacks of the PMD; this is mainly used to enhance ABI
compatibility.
* the rte_dmadev structure field is rearranged to reserve more space
for I/O functions.
* fix some ambiguous and unnecessary comments.
* fix the potential memory leak of ut.
* redefine skeldma_init_once to skeldma_count.
* suppress rte_dmadev error output when execute ut.
v19:
* squash maintainer patch to patch #1.
v18:
* RTE_DMA_STATUS_* add BUS_READ/WRITE_ERR, PAGE_FAULT.
* rte_dmadev dataplane APIs now check dev_started when debug is enabled.
* rte_dmadev_start/vchan_setup now check that the device is configured.
* rte_dmadev_dump supports formatting capability names.
* optimized the comments of rte_dmadev.
* fix skeldma_copy always returning zero when the enqueue is successful.
* log encapsulation macros now add newline characters.
* test_dmadev_api supports the rte_dmadev_dump() ut.
v17:
* remove rte_dmadev_selftest() API.
* move dmadev API test from dma/skeleton to app/test.
* fix compile error of dma/skeleton driver when building for x86-32.
* fix iol spell check warning of dmadev.rst.
MAINTAINERS | 7 +
app/test/meson.build | 4 +
app/test/test_dmadev.c | 43 +
app/test/test_dmadev_api.c | 543 ++++++++++++
config/rte_config.h | 3 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/dmadev.rst | 125 +++
doc/guides/prog_guide/img/dmadev.svg | 283 +++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/dma/meson.build | 11 +
drivers/dma/skeleton/meson.build | 7 +
drivers/dma/skeleton/skeleton_dmadev.c | 594 ++++++++++++++
drivers/dma/skeleton/skeleton_dmadev.h | 61 ++
drivers/dma/skeleton/version.map | 3 +
drivers/meson.build | 1 +
lib/dmadev/meson.build | 7 +
lib/dmadev/rte_dmadev.c | 607 ++++++++++++++
lib/dmadev/rte_dmadev.h | 1047 ++++++++++++++++++++++++
lib/dmadev/rte_dmadev_core.h | 183 +++++
lib/dmadev/rte_dmadev_pmd.h | 72 ++
lib/dmadev/version.map | 35 +
lib/meson.build | 1 +
24 files changed, 3645 insertions(+)
create mode 100644 app/test/test_dmadev.c
create mode 100644 app/test/test_dmadev_api.c
create mode 100644 doc/guides/prog_guide/dmadev.rst
create mode 100644 doc/guides/prog_guide/img/dmadev.svg
create mode 100644 drivers/dma/meson.build
create mode 100644 drivers/dma/skeleton/meson.build
create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
create mode 100644 drivers/dma/skeleton/version.map
create mode 100644 lib/dmadev/meson.build
create mode 100644 lib/dmadev/rte_dmadev.c
create mode 100644 lib/dmadev/rte_dmadev.h
create mode 100644 lib/dmadev/rte_dmadev_core.h
create mode 100644 lib/dmadev/rte_dmadev_pmd.h
create mode 100644 lib/dmadev/version.map
--
2.33.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 4/5] doc: changes for new pcapng and dumpcap
2021-09-03 22:06 1% ` [dpdk-dev] [PATCH v2 2/5] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-03 22:06 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-03 22:06 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
.../howto/img/packet_capture_framework.svg | 96 +++++++++----------
doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 +++++
doc/guides/prog_guide/pdump_lib.rst | 28 ++++--
doc/guides/rel_notes/release_21_11.rst | 10 ++
doc/guides/tools/dumpcap.rst | 80 ++++++++++++++++
doc/guides/tools/index.rst | 1 +
10 files changed, 222 insertions(+), 87 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
viewBox="0 0 425.19685 283.46457"
id="svg2"
version="1.1"
- inkscape:version="0.91 r13725"
- sodipodi:docname="drawing-pcap.svg">
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+ sodipodi:docname="packet_capture_framework.svg">
<defs
id="defs4">
<marker
@@ -228,7 +226,7 @@
x2="487.64606"
y2="258.38232"
gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-84.916417,744.90779)" />
+ gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
- inkscape:zoom="0.57434918"
- inkscape:cx="215.17857"
- inkscape:cy="285.26445"
+ inkscape:zoom="1"
+ inkscape:cx="226.77165"
+ inkscape:cy="78.124511"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
- inkscape:window-width="1874"
- inkscape:window-height="971"
- inkscape:window-x="2"
- inkscape:window-y="24"
- inkscape:window-maximized="0" />
+ inkscape:window-width="2560"
+ inkscape:window-height="1414"
+ inkscape:window-x="0"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ inkscape:document-rotation="0" />
<metadata
id="metadata7">
<rdf:RDF>
@@ -296,7 +295,7 @@
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title></dc:title>
+ <dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
@@ -321,15 +320,15 @@
y="790.82452" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="61.050636"
y="807.3205"
- id="text4152"
- sodipodi:linespacing="125%"><tspan
+ id="text4152"><tspan
sodipodi:role="line"
id="tspan4154"
x="61.050636"
- y="807.3205">DPDK Primary Application</tspan></text>
+ y="807.3205"
+ style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6"
@@ -339,19 +338,20 @@
y="827.01843" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="350.68585"
y="841.16058"
- id="text4189"
- sodipodi:linespacing="125%"><tspan
+ id="text4189"><tspan
sodipodi:role="line"
id="tspan4191"
x="350.68585"
- y="841.16058">dpdk-pdump</tspan><tspan
+ y="841.16058"
+ style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
sodipodi:role="line"
x="350.68585"
y="856.78558"
- id="tspan4193">tool</tspan></text>
+ id="tspan4193"
+ style="font-size:12.5px;line-height:1.25">tool</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4"
@@ -361,15 +361,15 @@
y="891.16315" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70612"
y="905.3053"
- id="text4189-1"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1"><tspan
sodipodi:role="line"
x="352.70612"
y="905.3053"
- id="tspan4193-3">PCAP PMD</tspan></text>
+ id="tspan4193-3"
+ style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-6"
@@ -379,15 +379,15 @@
y="923.9931" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.02846"
y="938.13525"
- id="text4189-0"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-0"><tspan
sodipodi:role="line"
x="136.02846"
y="938.13525"
- id="tspan4193-6">dpdk_port0</tspan></text>
+ id="tspan4193-6"
+ style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
<rect
style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-5"
@@ -397,33 +397,33 @@
y="824.99817" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="137.54369"
y="839.14026"
- id="text4189-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-4"><tspan
sodipodi:role="line"
x="137.54369"
y="839.14026"
- id="tspan4193-2">librte_pdump</tspan></text>
+ id="tspan4193-2"
+ style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
<rect
- style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5"
- width="94.449265"
- height="35.355339"
- x="307.7804"
- y="985.61243" />
+ width="108.21974"
+ height="35.335861"
+ x="297.9809"
+ y="985.62219" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="352.70618"
y="999.75458"
- id="text4189-1-8"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8"><tspan
sodipodi:role="line"
x="352.70618"
y="999.75458"
- id="tspan4193-3-2">capture.pcap</tspan></text>
+ id="tspan4193-3-2"
+ style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
<rect
style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
y="983.14984" />
<text
xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="136.53352"
y="1002.785"
- id="text4189-1-8-4"
- sodipodi:linespacing="125%"><tspan
+ id="text4189-1-8-4"><tspan
sodipodi:role="line"
x="136.53352"
y="1002.785"
- id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+ id="tspan4193-3-2-7"
+ style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
<path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+libraries for collecting packets (``librte_pdump``) and for writing packets
+to a file (``librte_pcapng``). There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets
+like the Wireshark dumpcap does for Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the ``librte_pdump``
+ library. The ``dpdk-pdump`` tool provides more limited functionality
+ and depends on the Pcap PMD. It is retained only for compatibility reasons;
+ users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+``librte_pcapng`` is a library for creating files in the Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using the DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the format desired.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhance Packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces
+ and stopping after a number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancement to the pdump library to support:
+ * Packet filter with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixes packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..90993bd5be5e
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes capture files in the Pcapng packet format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In DPDK, only ``testpmd`` is modified to initialize the packet capture
+ framework; other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than testpmd, the user
+ needs to explicitly modify that application to call the packet capture
+ framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+ code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 000:00:03.0
+ 1. 000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+
+Limitations
+-----------
+The following options of Wireshark ``dumpcap`` are not yet implemented:
+
+ * ``-f <capture filter>`` -- needs translation from classic BPF to eBPF.
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+ * ``-C <byte_limit>`` -- doesn't make sense in DPDK model.
+
+ * ``-t`` -- use a thread per interface
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` follow the Wireshark dumpcap program and
+ are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
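[A rough usage sketch of the library documented above. Only
rte_pcapng_init() and rte_pcapng_copy() are visible in this series; the
writer calls below (rte_pcapng_fdopen, rte_pcapng_write_packets,
rte_pcapng_close) are assumptions about the rest of the API and may not
match the final prototypes:

#include <fcntl.h>
#include <rte_mbuf.h>
#include <rte_pcapng.h>

static void
write_capture(struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	rte_pcapng_t *pcapng;
	int fd;

	rte_pcapng_init();	/* once, to set up timestamp computation */

	fd = open("/tmp/capture.pcapng", O_WRONLY | O_CREAT, 0644);
	/* Assumed writer API below; see the caveat above. */
	pcapng = rte_pcapng_fdopen(fd, NULL, NULL, "example", NULL);
	rte_pcapng_write_packets(pcapng, pkts, nb_pkts);
	rte_pcapng_close(pcapng);
}]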
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v2 2/5] pdump: support pcapng and filtering
@ 2021-09-03 22:06 1% ` Stephen Hemminger
2021-09-03 22:06 1% ` [dpdk-dev] [PATCH v2 4/5] doc: changes for new pcapng and dumpcap Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-03 22:06 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.
Add new API to do allow filtering of captured packets with
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because ring was full).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 419 ++++++++++++++++++++++++++++++------------
lib/pdump/rte_pdump.h | 110 ++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 415 insertions(+), 128 deletions(-)
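[A hypothetical secondary-process usage of the filtering support added here;
the rte_pdump_enable_bpf() parameter list is inferred from the pdump_request
fields in this patch (queue, ring, mempool, snaplen, BPF filter) and may not
match the final prototype in rte_pdump.h:

#include <rte_bpf.h>
#include <rte_pdump.h>

static int
start_filtered_capture(uint16_t port, struct rte_ring *ring,
		       struct rte_mempool *mp,
		       const struct rte_bpf *filter)
{
	/* Capture both directions in pcapng format, with packets
	 * truncated to 128 bytes and filtered by the BPF program. */
	return rte_pdump_enable_bpf(port, RTE_PDUMP_ALL_QUEUES,
				    RTE_PDUMP_FLAG_RXTX |
				    RTE_PDUMP_FLAG_PCAPNG,
				    128, ring, mp, filter);
}]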
diff --git a/lib/meson.build b/lib/meson.build
index 51bf9c2d11f0..4b01949fe715 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..dde536c2e047 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_memzone.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +29,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with an older
+ * version of DPDK using an incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf *filter;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,80 +61,137 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+ struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+ struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts,
+ const struct pdump_rxtx_cbs *cbs,
+ struct rte_pdump_stats *stats)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t bpf_rc[nb_pkts];
+
+ if (cbs->filter &&
+ !rte_bpf_exec_burst(cbs->filter, (void **)pkts, bpf_rc, nb_pkts)) {
+ /* All packets were filtered out */
+ __atomic_fetch_add(&stats->filtered, nb_pkts, __ATOMIC_RELAXED);
+ return;
+ }
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * if BPF program returns zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && bpf_rc[i] == 0) {
+ __atomic_fetch_add(&stats->filtered,
+ 1, __ATOMIC_RELAXED);
+ continue;
+ }
+
+ /*
+ * If using pcapng then we want to wrap packets;
+ * otherwise do a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (unlikely(p == NULL))
+ __atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+ else
dup_bufs[d_pkts++] = p;
}
+ __atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
- PDUMP_LOG(DEBUG,
- "only %d of packets enqueued to ring\n", ring_enq);
+ __atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
}
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
- uint16_t max_pkts __rte_unused,
- void *user_params)
+ uint16_t max_pkts __rte_unused, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ const struct pdump_rxtx_cbs *cbs = user_params;
+ struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, cbs, stats);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +200,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +224,30 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +256,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -233,32 +290,25 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
struct rte_ring *ring;
struct rte_mempool *mp;
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
flags = p->flags;
operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
- return -EINVAL;
- }
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
- }
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +346,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue, ring, mp,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +355,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue, ring, mp,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -347,8 +397,20 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
int
rte_pdump_init(void)
{
+ const struct rte_memzone *mz;
int ret;
+ mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+
+ rte_pcapng_init();
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +454,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +495,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +509,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->filter = filter;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +546,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function because, although the original API
+ * left a placeholder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf *filter)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +571,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, filter);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, filter);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +615,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, filter);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, filter);
}
int
@@ -537,8 +654,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +670,66 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+ const struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+ struct rte_pdump_stats *total)
+{
+ uint64_t *sum = (uint64_t *)total;
+ unsigned int i;
+ uint64_t val;
+ uint16_t qid;
+
+ for (qid = 0; qid < nq; qid++) {
+ const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+ for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+ val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+ sum[i] += val;
+ }
+ }
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+ struct rte_eth_dev_info dev_info;
+ const struct rte_memzone *mz;
+ int ret;
+
+ memset(stats, 0, sizeof(*stats));
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+
+ if (pdump_stats == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ PDUMP_LOG(ERR,
+ "pdump not initialized\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+ if (mz == NULL) {
+ PDUMP_LOG(ERR, "cannont find pdump stats\n");
+ rte_errno = EINVAL;
+ return -1;
+ }
+ pdump_stats = mz->addr;
+ }
+
+ pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+ pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..03320f5ac2ab 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on a given port and queue with filtering.
+ *
+ * @param port_id
+ * The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Pass UINT32_MAX to capture all of the available packet data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF filter program to run on captured packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on a given device id and queue with filtering.
+ * device_id can be the name or PCI address of the device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * The queue on the Ethernet port on which packet capturing
+ * should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ * The upper limit on bytes to copy.
+ * Pass UINT32_MAX to capture all of the available packet data.
+ * @param ring
+ * The ring on which captured packets will be enqueued for user.
+ * @param mp
+ * The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF filter program to run on captured packets (can be NULL).
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be the name or PCI address of the device.
@@ -153,6 +228,35 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are the sum over both the receive and transmit queues.
+ */
+struct rte_pdump_stats {
+ uint64_t accepted; /**< Number of packets accepted by filter. */
+ uint64_t filtered; /**< Number of packets rejected by filter. */
+ uint64_t nombuf; /**< Number of mbuf allocation failures. */
+ uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+ uint64_t reserved[4]; /**< Reserved, pads the struct to a cache line. */
+};
+
+/**
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param stats
+ * A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ * Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
#ifdef __cplusplus
}
#endif
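For readers skimming the new API, a minimal caller might look like the sketch
below. It is illustrative only: the ring and mempool are assumed to have been
created elsewhere, the usual headers are assumed, and a NULL filter simply
disables BPF matching.

/* Capture both directions on all queues of port 0 in pcapng format,
 * truncating each packet to 128 bytes; no BPF filter. */
if (rte_pdump_enable_bpf(0, RTE_PDUMP_ALL_QUEUES,
                         RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
                         128, ring, mp, NULL) < 0)
        rte_exit(EXIT_FAILURE, "pdump enable failed: %s\n",
                 rte_strerror(rte_errno));

/* Later: per-port capture statistics, summed over all queues. */
struct rte_pdump_stats stats;
if (rte_pdump_stats(0, &stats) == 0)
        printf("accepted %" PRIu64 ", ring full %" PRIu64 "\n",
               stats.accepted, stats.ringfull);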
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_stats;
+};
--
2.30.2
^ permalink raw reply [relevance 1%]
* [dpdk-dev] DPDK Release Status Meeting 2021-09-02
@ 2021-09-03 16:31 3% Mcnamara, John
0 siblings, 0 replies; 200+ results
From: Mcnamara, John @ 2021-09-03 16:31 UTC (permalink / raw)
To: dev; +Cc: thomas, Yigit, Ferruh
Release status meeting minutes 2021-09-02
=========================================
Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens
Participants:
* ARM
* Broadcom
* Canonical
* Intel
* Marvell
* Nvidia
* Red Hat
Release Dates
-------------
* v21.11 dates
* Proposal/V1: Friday, 10 September
* -rc1: Friday, 15 October
* Release: Friday, 19 November
Subtrees
--------
* main
* Reviews starting
* next-net
* Reviews ongoing
* Reviewing Ethdev patches
* Will pull from sub-tree today
* next-net-brcm
* Patchset of 14 patches pending on patchwork
* All other patches are merged into the Broadcom subtree
* next-net-intel
- Reviews and merging ongoing
* next-net-mlx
- PR not pulled due to comments that need to be addressed
- New version sent today.
* next-net-mrvl
- Some patches submitted and already pulled by Ferruh
* next-crypto
* Started merging
* PR ready early next week
* next-eventdev
* Around 15 patches
* Changes for API hiding (for ABI stability)
* next-virtio
* Reviewing patches
* Xilinx driver reviewed
* DMA Device RFC under review
* 30 patches on list
* New vdpa driver
LTS
---
* v19.11 (next version is v19.11.10)
* Awaiting test reports
* Proposed release: Sept 6
* v20.11 (next version is v20.11.3)
* Awaiting test reports
* Proposed release: Sept 6
* Distros
* v20.11 in Debian 11
* v20.11 in Ubuntu 21.04
Defects
-------
* Bugzilla links, 'Bugs', added for hosted projects
- https://www.dpdk.org/hosted-projects/
Opens
-----
* Call for roadmaps
* Proposal to submit roadmaps during the release week
DPDK Release Status Meetings
----------------------------
The DPDK Release Status Meeting is intended for DPDK Committers to discuss the status of the master tree and sub-trees, and for project managers to track progress or milestone dates.
The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK
If you wish to attend, just send an email to "John McNamara <john.mcnamara@intel.com>" for the invite.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols
2021-09-01 12:46 0% ` Aaron Conole
2021-09-03 11:15 0% ` Kinsella, Ray
@ 2021-09-03 13:32 0% ` Kinsella, Ray
1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-03 13:32 UTC (permalink / raw)
To: Aaron Conole
Cc: dev, bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor
On 01/09/2021 13:46, Aaron Conole wrote:
> Ray Kinsella <mdr@ashroe.eu> writes:
>
>> Use this script with the output of the DPDK symbol tool, to notify
>> maintainers of expired symbols by email. You need to define the environment
>> variable DPDK_GETMAINTAINER_PATH for this tool to work.
>>
>> Use terminal output to review the emails before sending.
>> e.g.
>> $ devtools/symbol-tool.py list-expired --format-output csv \
>> | DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
>> devtools/notify_expired_symbols.py --format-output terminal
>>
>> Then use email output to send the emails to the maintainers.
>> e.g.
>> $ devtools/symbol-tool.py list-expired --format-output csv \
>> | DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
>> devtools/notify_expired_symbols.py --format-output email \
>> --smtp-server <server> --sender <someone@somewhere.com> \
>> --password <password> --cc <someone@somewhere.com>
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>> devtools/notify-symbol-maintainers.py | 256 ++++++++++++++++++++++++++
>> 1 file changed, 256 insertions(+)
>> create mode 100755 devtools/notify-symbol-maintainers.py
>>
>> diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
>> new file mode 100755
>> index 0000000000..ee554687ff
>> --- /dev/null
>> +++ b/devtools/notify-symbol-maintainers.py
>> @@ -0,0 +1,256 @@
>> +#!/usr/bin/env python3
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2021 Intel Corporation
>> +# pylint: disable=invalid-name
>> +'''Tool to notify maintainers of expired symbols'''
>> +import smtplib
>> +import ssl
>> +import sys
>> +import subprocess
>> +import argparse
>> +from argparse import RawTextHelpFormatter
>> +import time
>> +from email.message import EmailMessage
>> +
>> +DESCRIPTION = '''
>> +Use this script with the output of the DPDK symbol tool, to notify maintainers
>> +and contributors of expired symbols by email. You need to define the environment
>> +variable DPDK_GETMAINTAINER_PATH for this tool to work.
>> +
>> +Use terminal output to review the emails before sending.
>> +e.g.
>> +$ devtools/symbol-tool.py list-expired --format-output csv \\
>> +| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
>> +{s} --format-output terminal
>> +
>> +Then use email output to send the emails to the maintainers.
>> +e.g.
>> +$ devtools/symbol-tool.py list-expired --format-output csv \\
>> +| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
>> +{s} --format-output email \\
>> +--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
>> +--cc <someone@somewhere.com>
>> +'''
>> +
>> +EMAIL_TEMPLATE = '''Hi there,
>> +
>> +Please note the symbols listed below have expired. In line with the DPDK ABI
>> +policy, they should be scheduled for removal, in the next DPDK release.
>> +
>> +For more information, please see the DPDK ABI Policy, section 3.5.3.
>> +https://doc.dpdk.org/guides/contributing/abi_policy.html
>> +
>> +Thanks,
>> +
>> +The DPDK Symbol Bot
>> +
>> +'''
>> +
>> +ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
>> +MAINTAINERS = 'MAINTAINERS'
>> +get_maintainer = ['devtools/get-maintainer.sh', \
>> + '--email', '-f']
>
> Maybe it's best to make this something that can be overridden. There's
> a series to change the .sh files to .py files. Perhaps an environment
> variable or argument?
>
>> +def _get_maintainers(libpath):
>> + '''Get the maintainers for given library'''
>> + try:
>> + cmd = get_maintainer + [libpath]
>> + result = subprocess.run(cmd, \
>> + stdout=subprocess.PIPE, \
>> + stderr=subprocess.PIPE,
>> + check=True)
>> + except subprocess.CalledProcessError:
>> + return None
>
> You might consider handling
>
> except FileNotFoundError:
> ....
>
> With a graceful exit and error message. In case the get_maintainers
> path changes.
>
So FYI - this gets into the weeds a bit.
As there is already a DPDK_GETMAINTAINER_PATH environment variable,
what would you call a new variable?
So instead I added logic for the script to sanity check that _everything_
is defined and where it expects it to be, and then complain loudly
and die when it is not.
The devtools scripts already cross-reference each other, so I'd expect
any change renaming get-maintainers.sh to get-maintainers.py to take
care of the cross-references.
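For reference, the graceful exit suggested above is a small change, e.g.
(a sketch only, not necessarily what was merged):

    try:
        result = subprocess.run(cmd,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                check=True)
    except FileNotFoundError:
        # the get-maintainer wrapper moved or was renamed
        sys.exit('cannot find ' + get_maintainer[0])
    except subprocess.CalledProcessError:
        return None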
Ray K
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols
2021-09-01 13:01 5% ` David Marchand
@ 2021-09-03 13:28 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-03 13:28 UTC (permalink / raw)
To: David Marchand
Cc: dev, Bruce Richardson, Stephen Hemminger, Yigit, Ferruh,
Thomas Monjalon, Kevin Traynor, Aaron Conole
Hi David,
On 01/09/2021 14:01, David Marchand wrote:
> Hello Ray,
>
> On Tue, Aug 31, 2021 at 4:51 PM Ray Kinsella <mdr@ashroe.eu> wrote:
>>
>> Use this script with the output of the DPDK symbol tool, to notify
>> maintainers of expired symbols by email. You need to define the environment
>> variable DPDK_GETMAINTAINER_PATH for this tool to work.
>>
>> Use terminal output to review the emails before sending.
Just realized I missed this.
>
> Two comments:
> - there are references of a previous name for the script,
> %s/notify_expired_symbols.py/notify-symbol-maintainers.py/g
Fixed in v11 - I used __file__ instead.
> - and a reminder for the empty report that we received yesterday.
> I think this can be reproduced with:
Yes - I remember that, I will fix in v12.
>
> $ DPDK_GETMAINTAINER_PATH=devtools/get_maintainer.pl
> devtools/notify-symbol-maintainers.py --format-output terminal <<EOF
>> mapfile,expired (v21.08,v19.11),contributor name,contributor email
>> lib/rib,rte_rib6_get_ip,Stephen Hemminger,stephen@networkplumber.org
>> EOF
> To:Ray Kinsella <mdr@ashroe.eu>, Thomas Monjalon <thomas@monjalon.net>
> Reply-To:no-reply@dpdk.org
> Subject:Expired symbols in
>
> Body:Hi there,
>
> Please note the symbols listed below have expired. In line with the DPDK ABI
> policy, they should be scheduled for removal, in the next DPDK release.
>
> For more information, please see the DPDK ABI Policy, section 3.5.3.
> https://doc.dpdk.org/guides/contributing/abi_policy.html
>
> Thanks,
>
> The DPDK Symbol Bot
>
> Symbol Contributor
> Email
>
>
> --------------------------------------------------------------------------------
>
> ^^^^
> Here, empty report.
>
> To:Vladimir Medvedkin <vladimir.medvedkin@intel.com>, stephen@networkplumber.org
> Reply-To:no-reply@dpdk.org
> CC:Ray Kinsella <mdr@ashroe.eu>, Thomas Monjalon <thomas@monjalon.net>
> Subject:Expired symbols in lib/rib
>
> Body:Hi there,
>
> Please note the symbols listed below have expired. In line with the DPDK ABI
> policy, they should be scheduled for removal, in the next DPDK release.
>
> For more information, please see the DPDK ABI Policy, section 3.5.3.
> https://doc.dpdk.org/guides/contributing/abi_policy.html
>
> Thanks,
>
> The DPDK Symbol Bot
>
> Symbol Contributor
> Email
> rte_rib6_get_ip Stephen Hemminger
> stephen@networkplumber.org
>
>
> --------------------------------------------------------------------------------
>
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts
2021-09-03 13:23 3% ` [dpdk-dev] [PATCH v11 " Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
@ 2021-09-03 13:23 17% ` Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-03 13:23 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
Add new abi management scripts to the MAINTAINERS file.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..ff8245271f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -129,6 +129,8 @@ F: devtools/gen-abi.sh
F: devtools/libabigail.abignore
F: devtools/update-abi.sh
F: devtools/update_version_map_abi.py
+F: devtools/notify-symbol-maintainers.py
+F: devtools/symbol-tool.py
F: buildtools/check-symbols.sh
F: buildtools/map-list-symbol.sh
F: drivers/*/*/*.map
--
2.26.2
^ permalink raw reply [relevance 17%]
* [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols
2021-09-03 13:23 3% ` [dpdk-dev] [PATCH v11 " Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
@ 2021-09-03 13:23 5% ` Ray Kinsella
2021-09-03 13:23 17% ` [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-03 13:23 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
Use this script with the output of the DPDK symbol tool, to notify
maintainers of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.
Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output terminal
Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password> --cc <someone@somewhere.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/notify-symbol-maintainers.py | 302 ++++++++++++++++++++++++++
1 file changed, 302 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..edf330f88b
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,302 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to notify maintainers of expired symbols'''
+import os
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+from pathlib import Path
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool, to notify maintainers
+and contributors of expired symbols by email. You need to define the environment
+variable DPDK_GETMAINTAINER_PATH for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
+--cc <someone@somewhere.com>
+''' # noqa: E501
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal, in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+''' # noqa: E501
+
+ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
+DPDK_GMP_ENV_VAR = 'DPDK_GETMAINTAINER_PATH'
+MAINTAINERS = 'MAINTAINERS'
+get_maintainer = ['devtools/get-maintainer.sh',
+ '--email', '-f']
+
+
+class EnvironException(Exception):
+ '''Subclass exception for Pylint\'s happiness.'''
+
+
+def _die_on_exception(e):
+ '''Print an exception, and quit'''
+
+ print('Fatal Error: ' + str(e))
+ sys.exit()
+
+
+def _check_get_maintainers_env():
+ '''Check get maintainers scripts are setup'''
+
+ if not Path(get_maintainer[0]).is_file():
+ raise EnvironException('Cannot locate DPDK\'s get maintainers script, '
+ 'usually at ' + get_maintainer[0] + '.')
+
+ if DPDK_GMP_ENV_VAR not in os.environ:
+ raise EnvironException(DPDK_GMP_ENV_VAR + ' is not defined.')
+
+ if not Path(os.environ[DPDK_GMP_ENV_VAR]).is_file():
+ raise EnvironException('Cannot locate get maintainers script, usually'
+ ' at ' + DPDK_GMP_ENV_VAR + '.')
+
+
+def _get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+
+ try:
+ _check_get_maintainers_env()
+ except EnvironException as e:
+ _die_on_exception(e)
+
+ try:
+ cmd = get_maintainer + [libpath]
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError as e:
+ _die_on_exception(e)
+
+ if result is None:
+ return None
+
+ email = result.stdout.decode('utf-8')
+ if email == '':
+ return None
+
+ email = list(filter(None, email.split('\n')))
+ return email
+
+
+default_maintainers = _get_maintainers(ABI_POLICY) + \
+ _get_maintainers(MAINTAINERS)
+
+
+def get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+ maintainers = _get_maintainers(libpath)
+
+ if maintainers is None:
+ maintainers = default_maintainers
+
+ return maintainers
+
+
+def get_message(library, symbols, config):
+ '''Build email message from symbols, config and maintainers'''
+ contributors = {}
+ message = {}
+ maintainers = get_maintainers(library)
+
+ if maintainers != default_maintainers:
+ message['CC'] = default_maintainers.copy()
+
+ if 'CC' in config:
+ message.setdefault('CC', []).append(config['CC'])
+
+ message['Subject'] = 'Expired symbols in {}\n'.format(library)
+
+ body = EMAIL_TEMPLATE
+ body += '{:<50}{:<25}{:<25}\n'.format('Symbol', 'Contributor', 'Email')
+ for sym in symbols:
+ body += ('{:<50}{:<25}{:<25}\n'.format(sym,
+ symbols[sym]['name'],
+ symbols[sym]['email']))
+ email = symbols[sym]['email']
+ contributors[email] = ''
+
+ contributors = list(contributors.keys())
+
+ message['To'] = maintainers + contributors
+ message['Body'] = body
+
+ return message
+
+
+class OutputEmail():
+ '''Format the output for email'''
+
+ def __init__(self, config):
+ self.config = config
+
+ self.terminal = OutputTerminal(config)
+ context = ssl.create_default_context()
+
+ # Try to log in to server and send email
+ try:
+ self.server = smtplib.SMTP(config['smtp_server'], 587)
+ self.server.starttls(context=context) # Secure the connection
+ self.server.login(config['sender'], config['password'])
+ except (smtplib.SMTPException, OSError) as e:
+ _die_on_exception(e)
+
+ def message(self, message):
+ '''send email'''
+ self.terminal.message(message)
+
+ msg = EmailMessage()
+ msg.set_content(message.pop('Body'))
+
+ for key in message.keys():
+ msg[key] = message[key]
+
+ msg['From'] = self.config['sender']
+ msg['Reply-To'] = 'no-reply@dpdk.org'
+
+ self.server.send_message(msg)
+
+ time.sleep(1)
+
+ def __del__(self):
+ self.server.quit()
+
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+ '''Format the output for the terminal'''
+
+ def __init__(self, config):
+ self.config = config
+
+ def message(self, message):
+ '''Print email to terminal'''
+
+ terminal = 'To:' + ', '.join(message['To']) + '\n'
+ if 'sender' in self.config.keys():
+ terminal += 'From:' + self.config['sender'] + '\n'
+
+ terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+
+ if 'CC' in message:
+ terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+ terminal += 'Subject:' + message['Subject'] + '\n'
+ terminal += 'Body:' + message['Body'] + '\n'
+
+ print(terminal)
+ print('-' * 80)
+
+
+def parse_config(args):
+ '''put the command line args in the right places'''
+ config = {}
+ error_msg = None
+
+ outputs = {
+ None: OutputTerminal,
+ 'terminal': OutputTerminal,
+ 'email': OutputEmail
+ }
+
+ if args.format_output == 'email':
+ if args.smtp_server is None:
+ error_msg = 'SMTP server'
+ else:
+ config['smtp_server'] = args.smtp_server
+
+ if args.sender is None:
+ error_msg = 'sender'
+ else:
+ config['sender'] = args.sender
+
+ if args.password is None:
+ error_msg = 'password'
+ else:
+ config['password'] = args.password
+
+ if args.cc is not None:
+ config['CC'] = args.cc
+
+ if error_msg is not None:
+ print('Please specify a {} for email output'.format(error_msg))
+ return None
+
+ config['output'] = outputs[args.format_output]
+ return config
+
+
+def main():
+ '''Main entry point'''
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+ parser.add_argument('--format-output',
+ choices=['terminal', 'email'],
+ default='terminal')
+ parser.add_argument('--smtp-server')
+ parser.add_argument('--password')
+ parser.add_argument('--sender')
+ parser.add_argument('--cc')
+
+ args = parser.parse_args()
+ config = parse_config(args)
+ if config is None:
+ return
+
+ symbols = {}
+ lastlib = library = ''
+
+ output = config['output'](config)
+
+ for line in sys.stdin:
+ line = line.rstrip('\n')
+
+ if line.find('mapfile') >= 0:
+ continue
+ library, symbol, name, email = line.split(',')
+
+ if library != lastlib:
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+ symbols = {}
+
+ lastlib = library
+ symbols[symbol] = {'name': name, 'email': email}
+
+ # print the last library
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases
2021-09-03 13:23 3% ` [dpdk-dev] [PATCH v11 " Ray Kinsella
@ 2021-09-03 13:23 5% ` Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-03 13:23 17% ` [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-09-03 13:23 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script has the ability to
count the added symbols between two dpdk releases, and to
list experimental symbols present in two dpdk releases
(expired symbols).
example usages:
Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols
Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05
List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib
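For a sense of the output, count-symbols prints one row per map file with
a pair of counts (total, experimental) per release. The values below are
illustrative only:

$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
mapfile                                           v20.11      v21.05
lib/rib                                           10    2     12    3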
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/symbol-tool.py | 505 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 505 insertions(+)
create mode 100755 devtools/symbol-tool.py
diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..a0b81c1b90
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,505 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+ from parsley import makeGrammar
+except ImportError:
+ print('This script uses the package Parsley to parse C Mapfiles.\n'
+ 'This can be installed with \"pip install parsley".')
+ sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script has the ability to
+count the added symbols between two dpdk releases, and to
+list experimental symbols present in two dpdk releases
+(expired symbols), including the name & email of the original contributor.
+
+example usages:
+
+Count symbols added since v19.11
+$ {s} count-symbols
+
+Count symbols added since v20.11
+$ {s} count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ {s} list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ {s} list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+""" # noqa: E501
+
+
+def get_abi_versions():
+ '''Returns a string of possible dpdk abi versions'''
+
+ year = datetime.date.today().year - 2000
+ tags = " |".join(['\'{}\''.format(i)
+ for i in reversed(range(21, year + 1))])
+ tags = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+ return tags
+
+
+def get_dpdk_releases():
+ '''Returns a list of dpdk release tags names since v19.11'''
+
+ year = datetime.date.today().year - 2000
+ year_range = "|".join("{}".format(i) for i in range(19, year + 1))
+ pattern = re.compile(r'^\"v(' + year_range + r')\.\d{2}\"$')
+
+ cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+ try:
+ result = subprocess.run(cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ print("Failed to interogate git for release tags")
+ sys.exit()
+
+ tags = result.stdout.decode('utf-8').split('\n')
+
+ # find the non-rcs between now and v19.11
+ tags = [tag.replace('\"', '')
+ for tag in reversed(tags)
+ if pattern.match(tag)][:-3]
+
+ return tags
+
+
+def fix_directory_name(path):
+ '''Prepend librte to the source directory name'''
+ mapfilepath1 = str(path.parent.name)
+ mapfilepath2 = str(path.parents[1])
+ mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+ return mapfilepath
+
+
+def directory_renamed(path, rel):
+ '''Fix removal of the librte_ from the directory names'''
+
+ mapfilepath = fix_directory_name(path)
+ tagfile = '{}:{}/{}'.format(rel, mapfilepath, path.name)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ return result
+
+
+def mapfile_renamed(path, rel):
+ '''Fix renaming of the map file'''
+ newfile = None
+
+ result = subprocess.run(['git', 'ls-tree',
+ rel, str(path.parent) + '/'],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ dentries = result.stdout.decode('utf-8')
+ dentries = dentries.split('\n')
+
+ # filter entries looking for the map file
+ dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+ if len(dentries) > 1 or len(dentries) == 0:
+ return None
+
+ dparts = dentries[0].split('/')
+ newfile = dparts[len(dparts) - 1]
+
+ if newfile is not None:
+ tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ else:
+ result = None
+
+ return result
+
+
+def mapfile_and_directory_renamed(path, rel):
+ '''Fix renaming of the map file & the source directory'''
+ mapfilepath = Path("{}/{}".format(fix_directory_name(path), path.name))
+
+ return mapfile_renamed(mapfilepath, rel)
+
+
+FIX_STRATEGIES = [directory_renamed,
+ mapfile_renamed,
+ mapfile_and_directory_renamed]
+
+
+def get_symbols(map_parser, release, mapfile_path):
+ '''Count the symbols for a given release and mapfile'''
+ abi_sections = {}
+
+ tagfile = '{}:{}'.format(release, mapfile_path)
+ try:
+ result = subprocess.run(['git', 'show', tagfile],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ for fix_strategy in FIX_STRATEGIES:
+ if result is not None:
+ break
+ result = fix_strategy(mapfile_path, release)
+
+ if result is not None:
+ mapfile = result.stdout.decode('utf-8')
+ abi_sections = map_parser(mapfile).abi()
+
+ return abi_sections
+
+
+def get_terminal_rows():
+ '''Find the number of rows in the terminal'''
+
+ try:
+ return os.get_terminal_size().lines
+ except IOError:
+ return 0
+
+
+class SymbolOwner():
+ '''Find the symbols original contributors name and email'''
+ symbol_regex = {}
+ blame_regex = {'name': r'author\s(.*)',
+ 'email': r'author-mail\s<(.*)>'}
+
+ def __init__(self, libpath, symbol):
+ self.libpath = libpath
+ self.symbol = symbol
+
+ # find variable definitions in C files, and functions in headers.
+ self.symbol_regex = \
+ {'*.c': r'^(?!extern).*' + self.symbol + '[^()]*;',
+ '*.h': r'__rte_experimental(?:.*\n){0,2}.*' + self.symbol}
+
+ def find_symbol_location(self):
+ '''Find where the symbol is defined in the source'''
+ for key in self.symbol_regex:
+ for path in Path(self.libpath).rglob(key):
+ file_text = open(path).read()
+
+ # find where the symbol is defined, either preceded by the
+ # rte_experimental tag (functions)
+ # or followed by a ; (variables)
+
+ exp = self.symbol_regex[key]
+ pattern = re.compile(exp, re.MULTILINE)
+ search = pattern.search(file_text)
+
+ if search is not None:
+ symbol_pos = search.span()[1]
+ symbol_line = file_text.count('\n', 0, symbol_pos) + 1
+
+ return [str(path), symbol_line]
+ return None
+
+ def find_symbol_owner(self):
+ '''Find the symbols original contributors name and email'''
+ owners = {}
+ location = self.find_symbol_location()
+
+ if location is None:
+ return None
+
+ line = '-L {},{}'.format(location[1], location[1])
+ # git blame -p(orcelain) -L(ine) path
+ args = ['-p', line, location[0]]
+
+ try:
+ result = subprocess.run(['git', 'blame'] + args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ return None
+
+ blame = result.stdout.decode('utf-8')
+ for key in self.blame_regex:
+ pattern = re.compile(self.blame_regex[key], re.MULTILINE)
+ match = pattern.search(blame)
+
+ owners[key] = match.groups()[0]
+
+ return owners
+
+
+class SymbolCountOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + dpdk_releases
+
+ self.terminal_rows = get_terminal_rows()
+ self.row = 0
+
+ def set_terminal_output(self, dpdk_rel):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}' + \
+ ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+ self.column_fmt = '{:50}' + \
+ ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+ def set_csv_output(self, dpdk_rel):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},' + \
+ ','.join(['{},{}'] * (len(dpdk_rel)))
+ self.column_fmt = '{},' + \
+ ','.join(['{},'] * (len(dpdk_rel)))
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+ self.row += 1
+
+ def print_row(self, mapfile, symbols):
+ '''Print row of symbol values'''
+ mapfile = str(mapfile)
+ print(self.output_fmt.format(*([mapfile] + symbols)))
+ self.row += 1
+
+ if ((self.terminal_rows > 0) and
+ ((self.row % self.terminal_rows) == 0)):
+ self.print_columns()
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class ListExpiredOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.terminal = True
+ self.OUTPUT_FORMATS[format_output](self, dpdk_releases)
+ self.column_titles = ['mapfile'] + \
+ ['expired (' + ','.join(dpdk_releases) + ')'] + \
+ ['contributor name', 'contributor email']
+
+ def set_terminal_output(self, _):
+ '''Set the output format to Tabbed Separated Values'''
+
+ self.output_fmt = '{:<50}{:<50}{:<25}{:<25}'
+ self.column_fmt = '{:50}{:50}{:25}{:25}'
+
+ def set_csv_output(self, _):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},{},{},{}'
+ self.column_fmt = '{},{},{},{}'
+ self.terminal = False
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+
+ def print_row(self, mapfile, symbols, owner):
+ '''Print row of symbol values'''
+
+ for symbol in symbols:
+ mapfile = str(mapfile)
+ name = owner[symbol]['name'] \
+ if owner[symbol] is not None else ''
+ email = owner[symbol]['email'] \
+ if owner[symbol] is not None else ''
+
+ print(self.output_fmt.format(mapfile, symbol, name, email))
+ if self.terminal:
+ mapfile = ''
+
+ OUTPUT_FORMATS = {None: set_terminal_output,
+ 'terminal': set_terminal_output,
+ 'csv': set_csv_output}
+
+
+class CountSymbolsAction:
+ ''' Logic to count symbols added since a give release '''
+ IGNORE_SECTIONS = ['EXPERIMENTAL', 'INTERNAL']
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.symbols_count = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbol_count = experimental_count = 0
+
+ symbols = get_symbols(self.parser, release, self.path)
+
+ # which versions are present, and which we care about
+ abi_vers = [abi_ver
+ for abi_ver in symbols
+ if abi_ver not in self.IGNORE_SECTIONS]
+
+ for abi_ver in abi_vers:
+ symbol_count += len(symbols[abi_ver])
+
+ # count experimental symbols
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental_count = len(symbols['EXPERIMENTAL'])
+
+ self.symbols_count += [symbol_count, experimental_count]
+
+ def __del__(self):
+ self.format_output.print_row(self.path.parent, self.symbols_count)
+
+
+class ListExpiredAction:
+ ''' Logic to list expired symbols between two releases '''
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.experimental_symbols = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbols = get_symbols(self.parser, release, self.path)
+
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental = [exp.strip() for exp in symbols['EXPERIMENTAL']]
+
+ self.experimental_symbols.append(experimental)
+
+ def __del__(self):
+ if len(self.experimental_symbols) != 2:
+ return
+
+ tmp = self.experimental_symbols
+ # find symbols present in both dpdk releases
+ intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+ # check for empty set
+ if intersect_syms == []:
+ return
+
+ sym_owner = {}
+ for sym in intersect_syms:
+ sym_owner[sym] = \
+ SymbolOwner(self.path.parent, sym).find_symbol_owner()
+
+ self.format_output.print_row(self.path.parent,
+ intersect_syms,
+ sym_owner)
+
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction,
+ 'count-symbols': CountSymbolsAction,
+ 'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput,
+ 'count-symbols': SymbolCountOutput,
+ 'list-expired': ListExpiredOutput}
+
+
+def main():
+ '''Main entry point'''
+
+ dpdk_releases = get_dpdk_releases()
+
+ parser = \
+ argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__),
+ formatter_class=RawTextHelpFormatter)
+
+ parser.add_argument('mode', choices=['count-symbols', 'list-expired'])
+ parser.add_argument('--format-output', choices=['terminal', 'csv'],
+ default='terminal')
+ parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+ default=SRC_DIRECTORIES)
+ parser.add_argument('--releases',
+ help='2 x comma separated release tags e.g. \''
+ + ','.join([dpdk_releases[0], dpdk_releases[-1]])
+ + '\'')
+ args = parser.parse_args()
+
+ if args.releases is not None:
+ dpdk_releases = args.releases.split(',')
+
+ if args.mode == 'list-expired':
+ if len(dpdk_releases) < 2:
+ sys.exit('Please specify two releases to compare '
+ 'in \'list-expired\' mode.')
+ dpdk_releases = [dpdk_releases[0],
+ dpdk_releases[len(dpdk_releases) - 1]]
+
+ action = ACTIONS[args.mode]
+ format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+ map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+ map_parser = makeGrammar(map_grammar, {})
+
+ format_output.print_columns()
+
+ for src_dir in args.directory.split(','):
+ for path in Path(src_dir).rglob('*.map'):
+ release_action = action(path, map_parser, format_output)
+
+ for release in dpdk_releases:
+ release_action.add_mapfile(release)
+
+ # all the magic happens in the destructor
+ del release_action
+
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-09-03 13:23 3% ` Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
` (2 more replies)
2021-09-08 15:12 3% ` [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols Ray Kinsella
` (2 subsequent siblings)
4 siblings, 3 replies; 200+ results
From: Ray Kinsella @ 2021-09-03 13:23 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
Scripts to count and track the lifecycle of DPDK symbols.
The symbol-tool script reports on the growth of symbols over releases
and list expired symbols. The notify-symbol-maintainers script
consumes the input from symbol-tool and generates email notifications
of expired symbols.
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments
v8: added tool to notify maintainers of expired symbols
v9: removed hardcoded emails addressed and script names
v10: added ability to identify and notify the original contributors
v11: addressed feedback from Aaron Conole, including PEP8 errors.
Ray Kinsella (3):
devtools: script to track symbols over releases
devtools: script to send notifications of expired symbols
maintainers: add new abi scripts
MAINTAINERS | 2 +
devtools/notify-symbol-maintainers.py | 302 +++++++++++++++
devtools/symbol-tool.py | 505 ++++++++++++++++++++++++++
3 files changed, 809 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
create mode 100755 devtools/symbol-tool.py
--
2.26.2
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-09-03 12:40 4% ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-09-03 12:40 1% ` Harman Kalra
0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Change the interrupt framework to use the interrupt handle get/set
APIs for every field access. Direct access to any of the fields
should be avoided to prevent ABI breakage in future (a before/after
sketch of the pattern follows the diffstat below).
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 94 ++++++----
lib/eal/linux/eal_interrupts.c | 294 +++++++++++++++++++------------
2 files changed, 242 insertions(+), 146 deletions(-)
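The pattern applied throughout both files is sketched below (the accessor
names are the ones introduced by this series; surrounding code elided):

/* before: direct access to rte_intr_handle fields */
if (intr_handle->type == RTE_INTR_HANDLE_VDEV ||
    intr_handle->fd < 0)
        return -1;

/* after: fields reached only through the get/set APIs */
if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV ||
    rte_intr_handle_fd_get(intr_handle) < 0)
        return -1;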
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..171006f19f 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_handle_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_handle_fd_get(ih);
return 0;
}
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,20 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ src->intr_handle =
+ rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +164,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +187,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_handle_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +228,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +243,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd exists */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +284,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +298,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +331,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent event is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +383,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +407,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +425,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +449,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +461,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +484,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +497,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read for different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +568,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +580,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent event is an expected
* condition in some circumstances
@@ -567,7 +592,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..570eddf088 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_handle_max_intr_get(intr_handle) ?
+ (rte_intr_handle_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_handle_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserved for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_handle_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_handle_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_handle_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,22 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +590,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +600,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +641,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +651,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +683,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +715,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +773,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +796,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +840,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -806,22 +850,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +908,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +941,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +954,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read for different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1018,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1058,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1068,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1171,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_handle_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_handle_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1235,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1248,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_handle_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1469,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_handle_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_handle_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1478,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1492,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_handle_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1504,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1529,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_handle_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1551,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1561,34 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_handle_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(intr_handle,
+ NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
- sizeof(union rte_intr_read_buffer)) {
+ if ((uint64_t)rte_intr_handle_efd_counter_size_get(intr_handle)
+ > sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_handle_efds_index_set(intr_handle, 0,
+ rte_intr_handle_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_handle_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_handle_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1600,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_handle_max_intr_get(intr_handle) >
+ rte_intr_handle_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); i++)
+ close(rte_intr_handle_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
+ rte_intr_handle_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_handle_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1622,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_handle_max_intr_get(intr_handle) -
+ rte_intr_handle_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal
2021-08-26 14:57 4% [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-08-26 14:57 1% ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-09-03 12:40 4% ` Harman Kalra
2021-09-03 12:40 1% ` [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
1 sibling, 1 reply; 200+ results
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev; +Cc: Harman Kalra
Moving struct rte_intr_handle to an internal structure to
avoid any ABI breakages in future. This structure defines
some static arrays, and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either the
RTE_MAX_RXTX_INTR_VEC_ID limit must be changed or the arrays must be
allocated dynamically based on the PCI device MSI-X size at probe time.
Either way, it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to any of the fields
should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced, where all these APIs are
defined and the struct rte_intr_handle definition is hidden.
Details on each patch of the series:
Patch 1: eal: interrupt handle API prototypes
This patch provides prototypes of all the new get/set APIs, and
also rearranges the headers related to the interrupt framework.
Epoll-related definitions and prototypes are moved into a new header,
rte_epoll.h, and the driver-specific APIs defined in
rte_eal_interrupts.h are moved to rte_interrupts.h (as they were
accessible and used outside the DPDK library anyway). Later in the
series, rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: implement get set APIs
Implementing all get, set and alloc APIs. The alloc APIs are implemented
to allocate memory for an interrupt handle instance. Currently most of
the drivers define the interrupt handle instance as static, but it can
no longer be static because the size of rte_intr_handle is unknown to
the drivers. Drivers are expected to allocate interrupt instances during
initialization and free these instances during the cleanup phase.
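For illustration, the expected driver pattern under the new model would
look roughly like the sketch below. Note this is only a sketch:
rte_intr_handle_fd_set() and rte_intr_handle_type_set() are assumed
setter names following the series' naming convention, and error
handling is trimmed.

	/* init: allocate an instance instead of declaring a static struct */
	struct rte_intr_handle *intr_handle;

	intr_handle = rte_intr_handle_instance_alloc(
			RTE_INTR_HANDLE_DEFAULT_SIZE, false);
	if (intr_handle == NULL)
		return -ENOMEM;

	/* assumed setters; fields are no longer dereferenced directly */
	rte_intr_handle_fd_set(intr_handle, fd);
	rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_UIO);

	/* cleanup: release the instance */
	rte_intr_handle_instance_free(intr_handle);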
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as required and to avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which currently access the
interrupt handle fields directly. Drivers are expected to allocate the
interrupt instance, use the get/set APIs with the allocated interrupt
handle, and free it on cleanup.
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently static) are dynamically allocated with a
default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
reallocated per device requirements using the new API
rte_intr_handle_event_list_update(). E.g., on PCI device probe the
MSI-X size can be queried and these arrays reallocated accordingly.
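A rough sketch of the probe-time reallocation described above follows;
the exact prototype of rte_intr_handle_event_list_update() and the
MSI-X query helper are assumptions here.

	/* probe: grow efds/elist beyond the default if the device needs it */
	int nb_vec = pci_msix_count(pdev);	/* hypothetical helper */

	if (nb_vec > RTE_MAX_RXTX_INTR_VEC_ID &&
	    rte_intr_handle_event_list_update(intr_handle, nb_vec) < 0)
		return -ENOMEM;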
Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the
alarm interrupt instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and i40e Intel
cards, where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
Harman Kalra (7):
eal: interrupt handle API prototypes
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle fields
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 237 +++---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 11 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 17 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 16 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 ++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 7 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 106 +--
drivers/common/cnxk/roc_nix_irq.c | 37 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 34 +
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 +--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 22 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 32 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 24 +-
drivers/net/e1000/igb_ethdev.c | 84 ++-
drivers/net/ena/ena_ethdev.c | 36 +-
drivers/net/enic/enic_main.c | 27 +-
drivers/net/failsafe/failsafe.c | 24 +-
drivers/net/failsafe/failsafe_intr.c | 45 +-
drivers/net/failsafe/failsafe_ops.c | 23 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 50 +-
drivers/net/hns3/hns3_ethdev_vf.c | 57 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 55 +-
drivers/net/i40e/i40e_ethdev_vf.c | 43 +-
drivers/net/iavf/iavf_ethdev.c | 41 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 23 +-
drivers/net/ice/ice_ethdev.c | 51 +-
drivers/net/igc/igc_ethdev.c | 47 +-
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 70 +-
drivers/net/memif/memif_socket.c | 114 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 63 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 20 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 48 +-
drivers/net/mlx5/linux/mlx5_os.c | 56 +-
drivers/net/mlx5/linux/mlx5_socket.c | 26 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 27 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 28 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 31 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 29 +-
drivers/net/tap/rte_eth_tap.c | 37 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 36 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +-
drivers/net/vhost/rte_eth_vhost.c | 78 +-
drivers/net/virtio/virtio_ethdev.c | 17 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 53 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 +-
drivers/raw/ifpga/ifpga_rawdev.c | 42 +-
drivers/raw/ntb/ntb.c | 10 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 11 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 668 +++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 2 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 56 +-
lib/eal/freebsd/eal_interrupts.c | 94 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 -------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 116 +++
lib/eal/include/rte_interrupts.h | 673 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 39 +-
lib/eal/linux/eal_dev.c | 65 +-
lib/eal/linux/eal_interrupts.c | 294 +++++---
lib/eal/version.map | 30 +
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3797 insertions(+), 1685 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols
2021-09-01 12:46 0% ` Aaron Conole
@ 2021-09-03 11:15 0% ` Kinsella, Ray
2021-09-03 13:32 0% ` Kinsella, Ray
1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-03 11:15 UTC (permalink / raw)
To: Aaron Conole
Cc: dev, bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor
On 01/09/2021 13:46, Aaron Conole wrote:
> Ray Kinsella <mdr@ashroe.eu> writes:
>
>> Use this script with the output of the DPDK symbol tool, to notify
>> maintainers of expired symbols by email. You need to define the environment
>> variable DPDK_GETMAINTAINER_PATH for this tool to work.
>>
>> Use terminal output to review the emails before sending.
>> e.g.
>> $ devtools/symbol-tool.py list-expired --format-output csv \
>> | DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
>> devtools/notify_expired_symbols.py --format-output terminal
>>
>> Then use email output to send the emails to the maintainers.
>> e.g.
>> $ devtools/symbol-tool.py list-expired --format-output csv \
>> | DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
>> devtools/notify_expired_symbols.py --format-output email \
>> --smtp-server <server> --sender <someone@somewhere.com> \
>> --password <password> --cc <someone@somewhere.com>
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>> devtools/notify-symbol-maintainers.py | 256 ++++++++++++++++++++++++++
>> 1 file changed, 256 insertions(+)
>> create mode 100755 devtools/notify-symbol-maintainers.py
>>
>> diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
>> new file mode 100755
>> index 0000000000..ee554687ff
>> --- /dev/null
>> +++ b/devtools/notify-symbol-maintainers.py
>> @@ -0,0 +1,256 @@
>> +#!/usr/bin/env python3
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2021 Intel Corporation
>> +# pylint: disable=invalid-name
>> +'''Tool to notify maintainers of expired symbols'''
>> +import smtplib
>> +import ssl
>> +import sys
>> +import subprocess
>> +import argparse
>> +from argparse import RawTextHelpFormatter
>> +import time
>> +from email.message import EmailMessage
>> +
>> +DESCRIPTION = '''
>> +Use this script with the output of the DPDK symbol tool, to notify maintainers
>> +and contributors of expired symbols by email. You need to define the environment
>> +variable DPDK_GETMAINTAINER_PATH for this tool to work.
>> +
>> +Use terminal output to review the emails before sending.
>> +e.g.
>> +$ devtools/symbol-tool.py list-expired --format-output csv \\
>> +| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
>> +{s} --format-output terminal
>> +
>> +Then use email output to send the emails to the maintainers.
>> +e.g.
>> +$ devtools/symbol-tool.py list-expired --format-output csv \\
>> +| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
>> +{s} --format-output email \\
>> +--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
>> +--cc <someone@somewhere.com>
>> +'''
>> +
>> +EMAIL_TEMPLATE = '''Hi there,
>> +
>> +Please note the symbols listed below have expired. In line with the DPDK ABI
>> +policy, they should be scheduled for removal, in the next DPDK release.
>> +
>> +For more information, please see the DPDK ABI Policy, section 3.5.3.
>> +https://doc.dpdk.org/guides/contributing/abi_policy.html
>> +
>> +Thanks,
>> +
>> +The DPDK Symbol Bot
>> +
>> +'''
>> +
>> +ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
>> +MAINTAINERS = 'MAINTAINERS'
>> +get_maintainer = ['devtools/get-maintainer.sh', \
>> + '--email', '-f']
>
> Maybe it's best to make this something that can be overridden. There's
> a series to change the .sh files to .py files. Perhaps an environment
> variable or argument?
ACK
>
>> +def _get_maintainers(libpath):
>> + '''Get the maintainers for given library'''
>> + try:
>> + cmd = get_maintainer + [libpath]
>> + result = subprocess.run(cmd, \
>> + stdout=subprocess.PIPE, \
>> + stderr=subprocess.PIPE,
>> + check=True)
>> + except subprocess.CalledProcessError:
>> + return None
>
> You might consider handling
>
> except FileNotFoundError:
> ....
>
> With a graceful exit and error message. In case the get_maintainers
> path changes.
ACK
>
>> + if result is None:
>> + return None
>> +
>> + email = result.stdout.decode('utf-8')
>> + if email == '':
>> + return None
>> +
>> + email = list(filter(None,email.split('\n')))
>> + return email
>> +
>> +default_maintainers = _get_maintainers(ABI_POLICY) + \
>> + _get_maintainers(MAINTAINERS)
>> +
>> +def get_maintainers(libpath):
>> + '''Get the maintainers for given library'''
>> + maintainers=_get_maintainers(libpath)
>> +
>> + if maintainers is None:
>> + maintainers = default_maintainers
>> +
>> + return maintainers
>> +
>> +def get_message(library, symbols, config):
>> + '''Build email message from symbols, config and maintainers'''
>> + contributors = {}
>> + message = {}
>> + maintainers = get_maintainers(library)
>> +
>> + if maintainers != default_maintainers:
>> + message['CC'] = default_maintainers.copy()
>> +
>> + if 'CC' in config:
>> + message.setdefault('CC',[]).append(config['CC'])
>> +
>> + message['Subject'] = 'Expired symbols in {}\n'.format(library)
>> +
>> + body = EMAIL_TEMPLATE
>> + body += '{:<50}{:<25}{:<25}\n'.format('Symbol','Contributor','Email')
>> + for sym in symbols:
>> + body += ('{:<50}{:<25}{:<25}\n'.format(sym,\
>> + symbols[sym]['name'],
>> + symbols[sym]['email'],
>> + ))
>> + email = symbols[sym]['email']
>> + contributors[email] = ''
>> +
>> + contributors = list(contributors.keys())
>> +
>> + message['To'] = maintainers + contributors
>> + message['Body'] = body
>> +
>> + return message
>> +
>> +class OutputEmail():
>> + '''Format the output for email'''
>> + def __init__(self, config):
>> + self.config = config
>> +
>> + self.terminal = OutputTerminal(config)
>> + context = ssl.create_default_context()
>> +
>> + # Try to log in to server and send email
>> + try:
>> + self.server = smtplib.SMTP(config['smtp_server'], 587)
>> + self.server.starttls(context=context) # Secure the connection
>> + self.server.login(config['sender'], config['password'])
>> + except Exception as exception:
>> + print(exception)
>> + raise exception
>> +
>> + def message(self,message):
>> + '''send email'''
>> + self.terminal.message(message)
>> +
>> + msg = EmailMessage()
>> + msg.set_content(message.pop('Body'))
>> +
>> + for key in message.keys():
>> + msg[key] = message[key]
>> +
>> + msg['From'] = self.config['sender']
>> + msg['Reply-To'] = 'no-reply@dpdk.org'
>> +
>> + self.server.send_message(msg)
>> +
>> + time.sleep(1)
>
> Why is this sleep needed?
Don't hammer the mail server :-)
>
>> +
>> + def __del__(self):
>> + self.server.quit()
>> +
>> +class OutputTerminal(): # pylint: disable=too-few-public-methods
>> + '''Format the output for the terminal'''
>> + def __init__(self, config):
>> + self.config = config
>> +
>> + def message(self,message):
>> + '''Print email to terminal'''
>> +
>> + terminal = 'To:' + ', '.join(message['To']) + '\n'
>> + if 'sender' in self.config.keys():
>> + terminal += 'From:' + self.config['sender'] + '\n'
>> +
>> + terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
>> +
>> + if 'CC' in message:
>> + terminal += 'CC:' + ', '.join(message['CC']) + '\n'
>> +
>> + terminal += 'Subject:' + message['Subject'] + '\n'
>> + terminal += 'Body:' + message['Body'] + '\n'
>> +
>> + print(terminal)
>> + print('-' * 80)
>> +
>> +def parse_config(args):
>> + '''put the command line args in the right places'''
>> + config = {}
>> + error_msg = None
>> +
>> + outputs = {
>> + None : OutputTerminal,
>> + 'terminal' : OutputTerminal,
>> + 'email' : OutputEmail
>> + }
>> +
>> + if args.format_output == 'email':
>> + if args.smtp_server is None:
>> + error_msg = 'SMTP server'
>> + else:
>> + config['smtp_server'] = args.smtp_server
>> +
>> + if args.sender is None:
>> + error_msg = 'sender'
>> + else:
>> + config['sender'] = args.sender
>> +
>> + if args.password is None:
>> + error_msg = 'password'
>> + else:
>> + config['password'] = args.password
>> +
>> + if args.cc is not None:
>> + config['CC'] = args.cc
>> +
>> + if error_msg is not None:
>> + print('Please specify a {} for email output'.format(error_msg))
>> + return None
>> +
>> + config['output'] = outputs[args.format_output]
>> + return config
>> +
>> +def main():
>> + '''Main entry point'''
>> + parser = argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__), \
>> + formatter_class=RawTextHelpFormatter)
>> + parser.add_argument('--format-output', choices=['terminal','email'], \
>> + default='terminal')
>> + parser.add_argument('--smtp-server')
>> + parser.add_argument('--password')
>> + parser.add_argument('--sender')
>> + parser.add_argument('--cc')
>> +
>> + args = parser.parse_args()
>> + config = parse_config(args)
>> + if config is None:
>> + return
>> +
>> + symbols = {}
>> + lastlib = library = ''
>> +
>> + output = config['output'](config)
>> +
>> + for line in sys.stdin:
>> + line = line.rstrip('\n')
>> +
>> + if line.find('mapfile') >= 0:
>> + continue
>> + library, symbol, name, email = line.split(',')
>> +
>> + if library != lastlib:
>> + message = get_message(lastlib, symbols, config)
>> + output.message(message)
>> + symbols = {}
>> +
>> + lastlib = library
>> + symbols[symbol] = {'name' : name, 'email' : email}
>> +
>> + #print the last library
>> + message = get_message(lastlib, symbols, config)
>> + output.message(message)
>> +
>> +if __name__ == '__main__':
>> + main()
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v1] ethdev: clarify flow action PORT ID semantics
@ 2021-09-03 7:46 3% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-09-03 7:46 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Ori Kam
Cc: dev, Eli Britstein, Ilya Maximets, Ajit Khaparde, Ivan Malov,
John Daley, Hyong Youb Kim, Jerin Jacob, Nithin Dabilpuram,
Kiran Kumar K, Somnath Kotur
From: Ivan Malov <ivan.malov@oktetlabs.ru>
The definition of the action PORT_ID is ambiguous.
The documentation says "Directs matching traffic to a given DPDK port ID."
It suggests that an application will receive the corresponding traffic
on the port using the ethdev Rx burst API. Some network PMDs implement
the PORT_ID action this way.
However, OvS+DPDK uses the PORT_ID action in a way which is natural for
it: to bypass OvS and redirect the corresponding traffic to the wire if
the DPDK port is a physical function, or to a virtual function if the
DPDK port is a VF representor. In any case, the corresponding packets
will not be received by OvS+DPDK using the ethdev Rx burst API. Other
network PMDs implement the PORT_ID action following these semantics.
The latter semantics may be explained to match the above definition by
taking the port representor definition into account, which says that a
port representor is a ghost of the real port, so redirecting traffic
to it means redirecting the traffic to the corresponding real port.
However, representors are not the only use case, and the solution
should be extended anyway to support both options.
Since these interpretations of the PORT_ID action semantics are
basically opposite and both have reasons behind them, it is bad to
silently break some applications by changing the default meaning. So
make the change backward incompatible to ensure that all applications
are updated to use the action in the right way.
Stick to ingress/egress terminology to specify the direction since it
is well defined from the DPDK application point of view:
- ingress: to be received via the specified port;
- egress: as if it would be transmitted via the specified port.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
The patch is a follow-up to the long discussion of the RFC [1].
It is just an attempt to summarize some results of the discussion.
Small quote from the discussion:
On June 3, 2021, 11:05 a.m. UTC, Ilya Maximets wrote:
> On June 3, 2021, 10:33 a.m. UTC, Andrew Rybchenko wrote:
> >
> > A. Add "ingress" bit with "egress" as unset meaning.
> > Yes, that's what is current behaviour assumed and
> > used by OvS and implemented in some PMDs.
> > My problem with it that it is, IMHO, inconsistent
> > default value (as explained above).
> >
> > B. Add "egress" bit with "ingress" as unset meaning.
> > Basically it is what is suggested in the RFC, but
> > the problem of the suggestion is the silent breakage
> > of existing users (let's put it a side if it is
> > correct usage or misuse). It is still the fact.
> >
> > C. Encode above in ethdev port ID MSB.
> > The problem of the solution is that encoding
> > makes sense for representors, but the problem
> > exists for non-representor ports as well.
> > I have no good ideas on terminology in the case
> > if we try to solve it for non-representors.
> >
> > D. Break API and ABI and add enum with unset(default)/
> > ingress/egress members to enforce application to
> > specify direction.
> >
> > It is unclear what we'll do in the case of A, B and D
> > if we encode representor in port ID MSB in any case.
>
> My opinion:
>
> - Option D is the best choice for rte_flow. No defaults, users forced
> to explicitly choose the direction in HW-independent way.
>
> - I agree that option C somewhat conflicts with the 'ingress/egress'
> flag idea and it is more hardware-specific. Therefore if option C
> is going to be implemented it should be implemented in concept of
> option A, i.e. 'egress' is default option if port ID MSB is not set.
If the solution is accepted, testpmd must be updated to require the
action direction specification, and drivers which implement the action
must be updated to handle the direction properly and reject an
unspecified value with an appropriate error message.
If the patch does not address all concerns and there is still
significant disagreement on the topic, it would be good to schedule a
call to discuss it.
[1] https://patches.dpdk.org/project/dpdk/patch/20210601111420.5549-1-ivan.malov@oktetlabs.ru/
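To illustrate the intended usage (a sketch only, not part of the patch):
an application that relies on the OvS-style interpretation would have to
state the direction explicitly, e.g.

	struct rte_flow_action_port_id conf = {
		.id = port_id,			/* DPDK port ID */
		.dir = RTE_FLOW_EGRESS,		/* must not be left unspecified */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

while an application expecting to receive the traffic itself via the
ethdev Rx burst API would use RTE_FLOW_INGRESS instead.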
doc/guides/prog_guide/rte_flow.rst | 7 ++++++-
doc/guides/rel_notes/deprecation.rst | 6 ------
doc/guides/rel_notes/release_21_11.rst | 4 ++++
lib/ethdev/rte_flow.h | 25 ++++++++++++++++++++++++-
4 files changed, 34 insertions(+), 8 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..89404b38af 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1919,7 +1919,10 @@ See `Item: PHY_PORT`_.
Action: ``PORT_ID``
^^^^^^^^^^^^^^^^^^^
-Directs matching traffic to a given DPDK port ID.
+
+Directs matching traffic to the specified DPDK port (ingress) or to
+the would-be destination as if the application itself sent this traffic
+from the said DPDK port (egress).
See `Item: PORT_ID`_.
@@ -1934,6 +1937,8 @@ See `Item: PORT_ID`_.
+--------------+---------------------------------------+
| ``id`` | DPDK port ID |
+--------------+---------------------------------------+
+ | ``dir`` | egress or ingress |
+ +--------------+---------------------------------------+
Action: ``METER``
^^^^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..4ec6927c86 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -128,12 +128,6 @@ Deprecation Notices
is deprecated and will be removed in DPDK 21.11. Shared counters should
be managed using shared actions API (``rte_flow_shared_action_create`` etc).
-* ethdev: Definition of the flow API action ``RTE_FLOW_ACTION_TYPE_PORT_ID``
- is ambiguous and needs clarification.
- Structure ``rte_flow_action_port_id`` will be extended to specify
- traffic direction to the represented entity or ethdev port itself
- in DPDK 21.11.
-
* ethdev: Flow API documentation is unclear if ethdev port used to create
a flow rule adds any implicit match criteria in the case of transfer rules.
The semantics will be clarified in DPDK 21.11 and it will require fixes in
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..b7aa175a32 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: ``struct rte_flow_action_port_id`` is extended with the direction
+ field with unspecified default, so all ``PORT_ID`` flow API action users
+ must be updated to make the correct choice.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d..3b83ed7d3a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2632,10 +2632,32 @@ struct rte_flow_action_phy_port {
uint32_t index; /**< Physical port index. */
};
+/** Traffic direction from application point of view. */
+enum rte_flow_direction {
+ /**
+ * Invalid value which should not be used and must be
+ * rejected by drivers.
+ */
+ RTE_FLOW_DIRECTION_UNSPECIFIED = 0,
+ /** As if the traffic was sent by the application. */
+ RTE_FLOW_EGRESS,
+ /** To be received by the application. */
+ RTE_FLOW_INGRESS,
+};
+
/**
* RTE_FLOW_ACTION_TYPE_PORT_ID
*
- * Directs matching traffic to a given DPDK port ID.
+ * Directs matching traffic to an ethdev with the given DPDK port ID or
+ * to the upstream port (the peer side of the wire) corresponding to it.
+ *
+ * It's assumed that it's the PMD (typically, its instance at the admin
+ * PF) which controls the binding between a (representor) ethdev and an
+ * upstream port. Typical bindings: VF rep. <=> VF, PF <=> network port.
+ * If the PMD instance is unaware of the binding between the ethdev and
+ * its upstream port (or can't control it), it should reject the action
+ * with the egress direction specified and log an appropriate error
+ * message.
*
* @see RTE_FLOW_ITEM_TYPE_PORT_ID
*/
@@ -2643,6 +2665,7 @@ struct rte_flow_action_port_id {
uint32_t original:1; /**< Use original DPDK port ID if possible. */
uint32_t reserved:31; /**< Reserved, must be zero. */
uint32_t id; /**< DPDK port ID. */
+ enum rte_flow_direction dir; /**< Direction to route traffic to. */
};
/**
--
2.30.2
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 1/5] net/enetfec: introduce NXP ENETFEC driver
@ 2021-09-03 7:14 3% ` David Marchand
2021-09-10 3:47 0% ` [dpdk-dev] [EXT] " Apeksha Gupta
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-09-03 7:14 UTC (permalink / raw)
To: Apeksha Gupta
Cc: Andrew Rybchenko, Yigit, Ferruh, dev, Hemant Agrawal, Sachin Saxena
On Thu, Sep 2, 2021 at 8:01 PM Apeksha Gupta <apeksha.gupta@nxp.com> wrote:
>
> ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC i.MX 8M Mini.
>
> This patch adds skeleton for enetfec driver with probe function.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> doc/guides/nics/enetfec.rst | 121 ++++++++++++++++++++
> doc/guides/nics/features/enetfec.ini | 8 ++
> doc/guides/nics/index.rst | 1 +
> drivers/net/enetfec/enet_ethdev.c | 95 ++++++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 160 +++++++++++++++++++++++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
> drivers/net/enetfec/meson.build | 15 +++
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 1 +
> 9 files changed, 435 insertions(+)
> create mode 100644 doc/guides/nics/enetfec.rst
> create mode 100644 doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c
> create mode 100644 drivers/net/enetfec/enet_ethdev.h
> create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/meson.build
> create mode 100644 drivers/net/enetfec/version.map
Please update MAINTAINERS and release notes in this first patch.
>
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> new file mode 100644
> index 0000000000..f151bb26c4
> --- /dev/null
> +++ b/doc/guides/nics/enetfec.rst
> @@ -0,0 +1,121 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 NXP
> +
> +ENETFEC Poll Mode Driver
> +========================
> +
> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> +support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
> +
> +More information can be found at NXP Official Website
> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> +
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK.
> +
> +Contents summary
> +
> +- ENETFEC overview
> +- ENETFEC features
> +- Supported ENETFEC SoCs
> +- Prerequisites
> +- Driver compilation and testing
> +- Limitations
> +
> +ENETFEC Overview
> +~~~~~~~~~~~~~~~~
> +The i.MX 8M Mini Media Applications Processor is built to achieve both high
> +performance and low power consumption. ENETFEC is a hardware programmable
> +packet forwarding engine to provide high performance Ethernet interface.
> +The diagram below shows a system level overview of ENETFEC:
> +
> + ====================================================+===============
> + US +-----------------------------------------+ | Kernel Space
> + | | |
> + | ENETFEC Driver | |
> + +-----------------------------------------+ |
> + ^ | |
> + ENETFEC RXQ | | TXQ |
> + PMD | | |
> + | v | +----------+
Misalignment of the right part.
> + +-------------+ | | fec-uio |
What is fec-uio?
I can't find it in upstream linux kernel.
> + | net_enetfec | | +----------+
> + +-------------+ |
> + ^ | |
> + TXQ | | RXQ |
> + | | |
> + | v |
> + ===================================================+===============
> + +----------------------------------------+
> + | | HW
> + | i.MX 8M MINI EVK |
> + | +-----+ |
> + | | MAC | |
> + +---------------+-----+------------------+
> + | PHY |
> + +-----+
Misalignment.
> +
> +ENETFEC Ethernet driver is traditional DPDK PMD driver running in the userspace.
> +The MAC and PHY are the hardware blocks. 'fec-uio' is the UIO driver, ENETFEC PMD
> +uses UIO interface to interact with kernel for PHY initialisation and for mapping
> +the allocated memory of register & BD in kernel with DPDK which gives access to
> +non-cacheable memory for BD. net_enetfec is logical Ethernet interface, created by
> +ENETFEC driver.
> +
> +- ENETFEC driver registers the device in virtual device driver.
> +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> +- The probe function will set the basic device registers and also set up the BD rings.
> +- On packet Rx the respective BD Ring status bit is set which is then used for
> + packet processing.
> +- Then Tx is done first followed by Rx via logical interfaces.
> +
> +ENETFEC Features
> +~~~~~~~~~~~~~~~~~
> +
> +- ARMv8
> +
> +Supported ENETFEC SoCs
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- i.MX 8M Mini
> +
> +Prerequisites
> +~~~~~~~~~~~~~
> +
> +There are four main prerequisites for executing the ENETFEC PMD on an i.MX 8M Mini
> +compatible board:
> +
> +1. **ARM 64 Tool Chain**
> +
> + For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> +
> +2. **Linux Kernel**
> +
> + It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
> +
> +3. **Root file system**
> +
> + Any *aarch64* supporting filesystem can be used. For example,
> + Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland which can be obtained
> + from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> +
> +4. The Ethernet device will be registered as virtual device, so ENETFEC has dependency on
> + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
> + run DPDK application.
> +
> +Driver compilation and testing
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Follow instructions available in the document
> +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +to launch **testpmd**
dpdk-testpmd.
> +
> +Limitations
> +~~~~~~~~~~~
> +
> +- Multi queue is not supported.
> +- Link status is down always.
> +- Single Ethernet interface.
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> new file mode 100644
> index 0000000000..5700697981
> --- /dev/null
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -0,0 +1,8 @@
> +;
> +; Supported features of the 'enetfec' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +ARMv8 = Y
> +Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 784d5d39f6..777fdab4a0 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -26,6 +26,7 @@ Network Interface Controller Drivers
> e1000em
> ena
> enetc
> + enetfec
> enic
> fm10k
> hinic
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> new file mode 100644
> index 0000000000..88774788cf
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -0,0 +1,95 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <stdio.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_vdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_dev.h>
> +#include <rte_ether.h>
> +#include "enet_ethdev.h"
> +#include "enet_pmd_logs.h"
> +
> +#define ENETFEC_NAME_PMD net_enetfec
> +#define ENETFEC_VDEV_GEM_ID_ARG "intf"
> +#define ENETFEC_CDEV_INVALID_FD -1
> +
> +int enetfec_logtype_pmd;
If using RTE_LOG_REGISTER* macros (see below), you don't need this definition.
> +
> +static int
> +enetfec_eth_init(struct rte_eth_dev *dev)
> +{
> + rte_eth_dev_probing_finish(dev);
> + return 0;
> +}
> +
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc;
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
> + ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> +
> + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> + if (dev == NULL)
> + return -ENOMEM;
> +
> + /* setup board info structure */
> + fep = dev->data->dev_private;
> + fep->dev = dev;
> + rc = enetfec_eth_init(dev);
> + if (rc)
> + goto failed_init;
> +
> + return 0;
> +
> +failed_init:
> + ENETFEC_PMD_ERR("Failed to init");
> + return rc;
> +}
> +
> +static int
> +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *eth_dev = NULL;
> + int ret;
> +
> + /* find the ethdev entry */
> + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> + if (eth_dev == NULL)
> + return -ENODEV;
> +
> + ret = rte_eth_dev_release_port(eth_dev);
> + if (ret != 0)
> + return -EINVAL;
> +
> + ENETFEC_PMD_INFO("Closing sw device");
> + return 0;
> +}
> +
> +static struct rte_vdev_driver pmd_enetfec_drv = {
> + .probe = pmd_enetfec_probe,
> + .remove = pmd_enetfec_remove,
> +};
> +
> +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> +RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENETFEC_VDEV_GEM_ID_ARG "=<int>");
> +
> +RTE_INIT(enetfec_pmd_init_log)
> +{
> + int ret;
> + ret = rte_log_register_type_and_pick_level(ENETFEC_LOGTYPE_PREFIX "driver",
> + RTE_LOG_NOTICE);
> + enetfec_logtype_pmd = (ret < 0) ? RTE_LOGTYPE_PMD : ret;
> +}
Please use RTE_LOG_REGISTER_DEFAULT.
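Something like the following (a sketch; NOTICE matches the level used
above) replaces both the constructor and the explicit variable
definition:

    RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);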
> diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
> new file mode 100644
> index 0000000000..8c61176fb5
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.h
> @@ -0,0 +1,160 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef __ENETFEC_ETHDEV_H__
> +#define __ENETFEC_ETHDEV_H__
> +
> +#include <compat.h>
> +#include <rte_ethdev.h>
> +
> +/* Common log type name prefix */
> +#define ENETFEC_LOGTYPE_PREFIX "pmd.net.enetfec."
> +
> +/*
> + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> + */
> +#define ENETFEC_MAX_Q 3
> +
> +#define BD_LEN 49152
> +#define ENETFEC_TX_FR_SIZE 2048
> +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> +#define MAX_RX_BD_RING_SIZE 512
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
> +
> +#define PKT_MAX_BUF_SIZE 1984
> +#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
> +#define ETH_ALEN RTE_ETHER_ADDR_LEN
> +#define ETH_HLEN RTE_ETHER_HDR_LEN
> +#define VLAN_HLEN 4
> +
> +struct bufdesc {
> + uint16_t bd_datlen; /* buffer data length */
> + uint16_t bd_sc; /* buffer control & status */
> + uint32_t bd_bufaddr; /* buffer address */
> +};
> +
> +struct bufdesc_ex {
> + struct bufdesc desc;
> + uint32_t bd_esc;
> + uint32_t bd_prot;
> + uint32_t bd_bdu;
> + uint32_t ts;
> + uint16_t res0[4];
> +};
> +
> +struct bufdesc_prop {
> + int que_id;
> + /* Addresses of Tx and Rx buffers */
> + struct bufdesc *base;
> + struct bufdesc *last;
> + struct bufdesc *cur;
> + void __iomem *active_reg_desc;
> + uint64_t descr_baseaddr_p;
> + unsigned short ring_size;
> + unsigned char d_size;
> + unsigned char d_size_log2;
> +};
> +
> +struct enetfec_priv_tx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
> + struct bufdesc *dirty_tx;
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +struct enetfec_priv_rx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
> + * descriptor base is x_bd_base. The currently available buffer is
> + * x_cur, where x is rx or tx. The buffer most recently processed by
> + * the controller is tracked by dirty_tx.
> + * The tx_cur and dirty_tx are the same in the completely full and empty
> + * conditions. The actual condition is determined by the empty & ready bits.
> + */
> +struct enetfec_private {
> + struct rte_eth_dev *dev;
> + struct rte_eth_stats stats;
> + struct rte_mempool *pool;
> + uint16_t max_rx_queues;
> + uint16_t max_tx_queues;
> + unsigned int total_tx_ring_size;
> + unsigned int total_rx_ring_size;
> + bool bufdesc_ex;
> + unsigned int tx_align;
> + unsigned int rx_align;
> + int full_duplex;
> + unsigned int phy_speed;
> + u_int32_t quirks;
> + int flag_csum;
> + int flag_pause;
> + int flag_wol;
> + bool rgmii_txc_delay;
> + bool rgmii_rxc_delay;
> + int link;
> + void *hw_baseaddr_v;
> + uint64_t hw_baseaddr_p;
> + void *bd_addr_v;
> + uint64_t bd_addr_p;
> + uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
> + uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
> + void *dma_baseaddr_r[ENETFEC_MAX_Q];
> + void *dma_baseaddr_t[ENETFEC_MAX_Q];
> + uint64_t cbus_size;
> + unsigned int reg_size;
> + unsigned int bd_size;
> + int hw_ts_rx_en;
> + int hw_ts_tx_en;
> + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
> + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
> +};
> +
> +#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
> +#define readl(p) rte_read32(p)
> +
> +static inline struct
> +bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +{
> + return (bdp >= bd->last) ? bd->base
> + : (struct bufdesc *)(((void *)bdp) + bd->d_size);
> +}
> +
> +static inline struct
> +bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +{
> + return (bdp <= bd->base) ? bd->last
> + : (struct bufdesc *)(((void *)bdp) - bd->d_size);
> +}
> +
> +static inline int
> +enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +{
> + return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> +}
> +
> +static inline int
> +fls64(unsigned long word)
> +{
> + return (64 - __builtin_clzl(word)) - 1;
> +}
> +
> +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts);
> +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> + uint16_t nb_pkts);
> +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> + struct bufdesc_prop *bd);
> +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> + struct rte_mbuf *mbuf);
> +
> +#endif /*__ENETFEC_ETHDEV_H__*/
> diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
> new file mode 100644
> index 0000000000..e7b3964a0e
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_pmd_logs.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _ENETFEC_LOGS_H_
> +#define _ENETFEC_LOGS_H_
> +
> +extern int enetfec_logtype_pmd;
> +
> +/* PMD related logs */
> +#define ENETFEC_PMD_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
> + fmt "\n", __func__, ##args)
> +
> +#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
> +
> +#define ENETFEC_PMD_DEBUG(fmt, args...) \
> + ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
> +#define ENETFEC_PMD_ERR(fmt, args...) \
> + ENETFEC_PMD_LOG(ERR, fmt, ## args)
> +#define ENETFEC_PMD_INFO(fmt, args...) \
> + ENETFEC_PMD_LOG(INFO, fmt, ## args)
> +
> +#define ENETFEC_PMD_WARN(fmt, args...) \
> + ENETFEC_PMD_LOG(WARNING, fmt, ## args)
> +
> +/* DP Logs, toggled out at compile time if level lower than current level */
> +#define ENETFEC_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, fmt, ## args)
> +
> +#endif /* _ENETFEC_LOGS_H_ */
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> new file mode 100644
> index 0000000000..252bf83309
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
The doc specifies that only armv8 is supported.
> +endif
> +
> +deps += ['common_dpaax']
> +
> +sources = files('enet_ethdev.c')
> +
> +if cc.has_argument('-Wno-pointer-arith')
> + cflags += '-Wno-pointer-arith'
> +endif
Can't this be fixed in the code rather than ignored?
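For example (untested sketch), enet_get_nextdesc() could do the
arithmetic on a char pointer instead of void *:

    static inline struct bufdesc *
    enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
    {
        return (bdp >= bd->last) ? bd->base
            : (struct bufdesc *)((char *)bdp + bd->d_size);
    }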
> diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
> new file mode 100644
> index 0000000000..170c04fe53
> --- /dev/null
> +++ b/drivers/net/enetfec/version.map
> @@ -0,0 +1,3 @@
> +DPDK_20.0 {
Wrong ABI version.
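For a new library merged in 21.11 this should be the current ABI block,
e.g.:

    DPDK_22 {
        local: *;
    };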
> + local: *;
> +};
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index bcf488f203..92f433d5e8 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -19,6 +19,7 @@ drivers = [
> 'e1000',
> 'ena',
> 'enetc',
> + 'enetfec',
Indent.
> 'enic',
> 'failsafe',
> 'fm10k',
> --
> 2.17.1
>
--
David Marchand
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 4/5] doc: changes for new pcapng and dumpcap
2021-09-03 0:47 1% ` [dpdk-dev] [PATCH 2/5] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-03 0:47 2% ` Stephen Hemminger
` (5 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-03 0:47 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Describe the new packet capture library and utilities
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
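A minimal call sequence for the new pcapng library documented below
(a sketch only, using the symbols that appear in patch 2/5 of this
series; port, queue, pkt and mp are assumed to be set up by the caller):

    /* Once, before any capture, to set up timestamp computation. */
    rte_pcapng_init();

    /* Per packet: make a pcapng-framed copy of an mbuf; NULL means
     * the copy mempool is exhausted. */
    struct rte_mbuf *m = rte_pcapng_copy(port, queue, pkt, mp,
                                         UINT32_MAX /* snaplen */,
                                         rte_get_tsc_cycles(),
                                         RTE_PCAPNG_DIRECTION_IN);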
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/howto/packet_capture_framework.rst | 67 ++++++++--------
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/pcapng_lib.rst | 24 ++++++
doc/guides/prog_guide/pdump_lib.rst | 28 +++++--
doc/guides/rel_notes/release_21_11.rst | 10 +++
doc/guides/tools/dumpcap.rst | 80 +++++++++++++++++++
doc/guides/tools/index.rst | 1 +
9 files changed, 174 insertions(+), 39 deletions(-)
create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
create mode 100644 doc/guides/tools/dumpcap.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
[experimental APIs] (@ref rte_compat.h),
[ABI versioning] (@ref rte_function_versioning.h),
[version] (@ref rte_version.h)
+ [pcapng] (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/metrics \
@TOPDIR@/lib/node \
@TOPDIR@/lib/net \
+ @TOPDIR@/lib/pcapng \
@TOPDIR@/lib/pci \
@TOPDIR@/lib/pdump \
@TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2017 Intel Corporation.
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
This document describes how the Data Plane Development Kit (DPDK) Packet
Capture Framework is used for capturing packets on DPDK ports. It is intended
for users of DPDK who want to know more about the Packet Capture feature and
for those who want to monitor traffic on DPDK-controlled devices.
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+libraries for collecting packets ``librte_pdump`` and writing packets
+to a file ``librte_pcapng``. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
Introduction
------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
disable packet capture. The library works on a multi process communication model and its
usage is recommended for debugging purposes.
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library. It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets
+like Wireshark's dumpcap does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
Some things to note:
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
application which has the packet capture framework initialized already. In
dpdk, only ``testpmd`` is modified to initialize packet capture framework,
- other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+ other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
be used with any application other than the testpmd, the user needs to
explicitly modify that application to call the packet capture framework
initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
for ``pdump`` keyword to see how this is done.
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+ ``librte_pdump`` library. It provides more limited functionality and
+ depends on the Pcap PMD. It is retained only for compatibility reasons;
+ users should use ``dpdk-dumpcap`` instead.
Test Environment
----------------
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
for packet capturing on the DPDK port in
:numref:`figure_packet_capture_framework`.
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
.. figure:: img/packet_capture_framework.*
- Packet capturing on a DPDK port using the dpdk-pdump tool.
+ Packet capturing on a DPDK port using the dpdk-dumpcap utility.
Running the Application
-----------------------
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
inspect them using ``tcpdump``.
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
- sudo <build_dir>/app/dpdk-pdump -- \
- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+ sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
#. Send traffic to dpdk_port0 from traffic generator.
- Inspect packets captured in the file capture.pcap using a tool
- that can interpret Pcap files, for example tcpdump::
+ Inspect packets captured in the file capture.pcapng using a tool such as
+ tcpdump or tshark that can interpret Pcapng files::
- $tcpdump -nr /tmp/capture.pcap
+ $ tcpdump -nr /tmp/capture.pcapng
reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
ip_fragment_reassembly_lib
generic_receive_offload_lib
generic_segmentation_offload_lib
+ pcapng_lib
pdump_lib
multi_proc_support
kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+The pcapng library (``librte_pcapng``) creates files in the Pcapng file
+format, the default capture file format for modern network capture
+processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
.. _pdump_library:
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
The library does the complete copy of the Rx and Tx mbufs to a new mempool and
hence it slows down the performance of the applications, so it is recommended
to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
* ``rte_pdump_enable()``:
This API enables the packet capture on a given port and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+ This API enables the packet capture on a given port and queue.
+ It also allows setting an optional filter using DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_enable_by_deviceid()``:
This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
- Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+ This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+ It also allows setting an optional filter using DPDK BPF interpreter and
+ setting the captured packet length.
* ``rte_pdump_disable()``:
This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
the rte_ring that secondary process have passed to these APIs.
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If the ``RTE_PDUMP_FLAG_PCAPNG`` is set the mbuf data is extended with header and trailer
+to match the format of Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the format desired.
+
The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
Use Case: Packet Capturing
--------------------------
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b5738348b..ee24cbfdb99d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Enhanced packet capture.**
+
+ * New dpdk-dumpcap program that has most of the features of the
+ Wireshark dumpcap utility, including capture of multiple interfaces
+ and stopping after a number of bytes or packets.
+ * New library for writing pcapng packet capture files.
+ * Enhancements to the pdump library to support:
+ * Packet filtering with BPF.
+ * Pcapng format with timestamps and meta-data.
+ * Fixed packet capture with stripped VLAN tags.
Removed Items
-------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..90993bd5be5e
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes files in the Pcapng packet capture
+file format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If ``-w`` option is specified, then that file is used.
+
+ .. Note::
+ * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+ application which has the packet capture framework initialized already.
+ In dpdk, only the ``testpmd`` is modified to initialize packet capture
+ framework, other applications remain untouched. So, if the ``dpdk-dumpcap``
+ tool has to be used with any application other than the testpmd, the user
+ needs to explicitly modify that application to call the packet capture
+ framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+ code to see how this is done.
+
+ * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+ the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+ # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+ 0. 000:00:03.0
+ 1. 000:00:03.1
+
+ # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+ Packets captured: 6
+ Packets received/dropped on interface '0000:00:03.0' 6/0
+
+
+Limitations
+-----------
+The following options of Wireshark ``dumpcap`` are not yet implemented:
+
+ * ``-f <capture filter>`` -- needs translation from classic BPF to eBPF.
+
+ * ``-b|--ring-buffer`` -- more complex file management.
+
+ * ``-C <byte_limit>`` -- doesn't make sense in DPDK model.
+
+ * ``-t`` -- use a thread per interface.
+
+ * Timestamp type.
+
+ * Link data types. Only EN10MB (Ethernet) is supported.
+
+ * Wireless related options: ``-I|--monitor-mode`` and ``-k <freq>``
+
+
+.. Note::
+ * The options to ``dpdk-dumpcap`` are like the Wireshark dumpcap program and
+ are not the same as ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
:maxdepth: 2
:numbered:
+ dumpcap
proc_info
pdump
pmdinfo
--
2.30.2
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH 2/5] pdump: support pcapng and filtering
@ 2021-09-03 0:47 1% ` Stephen Hemminger
2021-09-03 0:47 2% ` [dpdk-dev] [PATCH 4/5] doc: changes for new pcapng and dumpcap Stephen Hemminger
` (6 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-03 0:47 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.
The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.
The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.
Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
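A minimal usage sketch from a secondary process, using only the new API
added below; the ring and mempool names and sizes are illustrative, not
required by the library:

    struct rte_ring *ring = rte_ring_create("pdump_ring", 1024,
            rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
    struct rte_mempool *pool = rte_pktmbuf_pool_create("pdump_pool",
            8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    /* Capture both directions on all queues of port 0 in pcapng
     * format, whole packets, no BPF filter. */
    if (ring != NULL && pool != NULL)
        rte_pdump_enable_bpf(0, RTE_PDUMP_ALL_QUEUES,
                             RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
                             0 /* snaplen: whole packet */,
                             ring, pool, NULL);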
lib/meson.build | 4 +-
lib/pdump/meson.build | 2 +-
lib/pdump/rte_pdump.c | 386 +++++++++++++++++++++++++++++-------------
lib/pdump/rte_pdump.h | 117 ++++++++++++-
lib/pdump/version.map | 8 +
5 files changed, 394 insertions(+), 123 deletions(-)
diff --git a/lib/meson.build b/lib/meson.build
index 514be90b09ec..b59bec494275 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -26,6 +26,7 @@ libraries = [
'timer', # eventdev depends on this
'acl',
'bbdev',
+ 'bpf',
'bitratestats',
'cfgfile',
'compressdev',
@@ -43,7 +44,6 @@ libraries = [
'member',
'pcapng',
'power',
- 'pdump',
'rawdev',
'regexdev',
'rib',
@@ -55,10 +55,10 @@ libraries = [
'ipsec', # ipsec lib depends on net, crypto and security
'fib', #fib lib depends on rib
'port', # pkt framework libs which use other libs from above
+ 'pdump', # pdump lib depends on bpf pcapng
'table',
'pipeline',
'flow_classify', # flow_classify lib depends on pkt framework table lib
- 'bpf',
'graph',
'node',
]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
sources = files('rte_pdump.c')
headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..3237c54af69e 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -9,6 +9,7 @@
#include <rte_log.h>
#include <rte_errno.h>
#include <rte_string_fns.h>
+#include <rte_pcapng.h>
#include "rte_pdump.h"
@@ -27,30 +28,26 @@ enum pdump_operation {
ENABLE = 2
};
+/*
+ * Note: version numbers intentionally start at 3
+ * in order to catch any application built with an older
+ * version of DPDK using an incompatible client request format.
+ */
enum pdump_version {
- V1 = 1
+ PDUMP_CLIENT_LEGACY = 3,
+ PDUMP_CLIENT_PCAPNG = 4,
};
struct pdump_request {
uint16_t ver;
uint16_t op;
- uint32_t flags;
- union pdump_data {
- struct enable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } en_v1;
- struct disable_v1 {
- char device[RTE_DEV_NAME_MAX_LEN];
- uint16_t queue;
- struct rte_ring *ring;
- struct rte_mempool *mp;
- void *filter;
- } dis_v1;
- } data;
+ uint16_t flags;
+ uint16_t queue;
+ struct rte_ring *ring;
+ struct rte_mempool *mp;
+ const struct rte_bpf *filter;
+ uint32_t snaplen;
+ char device[RTE_DEV_NAME_MAX_LEN];
};
struct pdump_response {
@@ -63,36 +60,67 @@ static struct pdump_rxtx_cbs {
struct rte_ring *ring;
struct rte_mempool *mp;
const struct rte_eth_rxtx_callback *cb;
- void *filter;
+ const struct rte_bpf *filter;
+ enum pdump_version ver;
+ uint32_t snaplen;
+ struct rte_pdump_stats stats;
} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+ enum rte_pcapng_direction direction,
+ struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
unsigned int i;
int ring_enq;
uint16_t d_pkts = 0;
struct rte_mbuf *dup_bufs[nb_pkts];
- struct pdump_rxtx_cbs *cbs;
+ struct pdump_rxtx_cbs *cbs = user_params;
+ uint64_t ts;
struct rte_ring *ring;
struct rte_mempool *mp;
struct rte_mbuf *p;
+ uint64_t bpf_rc[nb_pkts];
+
+ if (cbs->filter &&
+ !rte_bpf_exec_burst(cbs->filter, (void **)pkts, bpf_rc, nb_pkts))
+ return; /* our work here is done */
- cbs = user_params;
+ ts = rte_get_tsc_cycles();
ring = cbs->ring;
mp = cbs->mp;
for (i = 0; i < nb_pkts; i++) {
- p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
- if (p)
+ /*
+ * Similar behavior to rte_bpf_eth callback.
+ * if BPF program returns zero value for a given packet,
+ * then it will be ignored.
+ */
+ if (cbs->filter && bpf_rc[i] == 0)
+ continue;
+
+ /*
+ * If using pcapng then want to wrap packets
+ * otherwise a simple copy.
+ */
+ if (cbs->ver == PDUMP_CLIENT_PCAPNG)
+ p = rte_pcapng_copy(port_id, queue,
+ pkts[i], mp, cbs->snaplen,
+ ts, direction);
+ else
+ p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+ if (likely(p != NULL))
dup_bufs[d_pkts++] = p;
}
+ cbs->stats.accepted += d_pkts;
ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
if (unlikely(ring_enq < d_pkts)) {
unsigned int drops = d_pkts - ring_enq;
+ cbs->stats.missed += drops;
PDUMP_LOG(DEBUG,
"only %d of packets enqueued to ring\n", ring_enq);
rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
@@ -100,43 +128,50 @@ pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
}
static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts,
uint16_t max_pkts __rte_unused,
void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+ pkts, nb_pkts, user_params);
return nb_pkts;
}
static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
{
- pdump_copy(pkts, nb_pkts, user_params);
+ pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+ pkts, nb_pkts, user_params);
return nb_pkts;
}
static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &rx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"rx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ memset(&cbs->stats, 0, sizeof(cbs->stats));
+
cbs->cb = rte_eth_add_first_rx_callback(port, qid,
pdump_rx, cbs);
if (cbs->cb == NULL) {
@@ -145,8 +180,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -170,26 +204,30 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
}
static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
- struct rte_ring *ring, struct rte_mempool *mp,
- uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+ uint16_t end_q, uint16_t port, uint16_t queue,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ uint16_t operation, uint32_t snaplen)
{
uint16_t qid;
- struct pdump_rxtx_cbs *cbs = NULL;
qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
for (; qid < end_q; qid++) {
- cbs = &tx_cbs[port][qid];
- if (cbs && operation == ENABLE) {
+ struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+ if (operation == ENABLE) {
if (cbs->cb) {
PDUMP_LOG(ERR,
"tx callback for port=%d queue=%d, already exists\n",
port, qid);
return -EEXIST;
}
+ cbs->ver = ver;
cbs->ring = ring;
cbs->mp = mp;
+ cbs->snaplen = snaplen;
+ memset(&cbs->stats, 0, sizeof(cbs->stats));
cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
cbs);
if (cbs->cb == NULL) {
@@ -198,8 +236,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
rte_errno);
return rte_errno;
}
- }
- if (cbs && operation == DISABLE) {
+ } else if (operation == DISABLE) {
int ret;
if (cbs->cb == NULL) {
@@ -233,32 +270,25 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
struct rte_ring *ring;
struct rte_mempool *mp;
+ if (!(p->ver == PDUMP_CLIENT_LEGACY ||
+ p->ver == PDUMP_CLIENT_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "incorrect client version %u\n", p->ver);
+ return -EINVAL;
+ }
+
flags = p->flags;
operation = p->op;
- if (operation == ENABLE) {
- ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.en_v1.device);
- return -EINVAL;
- }
- queue = p->data.en_v1.queue;
- ring = p->data.en_v1.ring;
- mp = p->data.en_v1.mp;
- } else {
- ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
- &port);
- if (ret < 0) {
- PDUMP_LOG(ERR,
- "failed to get port id for device id=%s\n",
- p->data.dis_v1.device);
- return -EINVAL;
- }
- queue = p->data.dis_v1.queue;
- ring = p->data.dis_v1.ring;
- mp = p->data.dis_v1.mp;
+ queue = p->queue;
+ ring = p->ring;
+ mp = p->mp;
+
+ ret = rte_eth_dev_get_port_by_name(p->device, &port);
+ if (ret < 0) {
+ PDUMP_LOG(ERR,
+ "failed to get port id for device id=%s\n",
+ p->device);
+ return -EINVAL;
}
/* validation if packet capture is for all queues */
@@ -296,8 +326,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register RX callback */
if (flags & RTE_PDUMP_FLAG_RX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
- ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue, ring, mp,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -305,8 +335,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
/* register TX callback */
if (flags & RTE_PDUMP_FLAG_TX) {
end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
- ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
- operation);
+ ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue, ring, mp,
+ operation, p->snaplen);
if (ret < 0)
return ret;
}
@@ -349,6 +379,8 @@ rte_pdump_init(void)
{
int ret;
+ rte_pcapng_init();
+
ret = rte_mp_action_register(PDUMP_MP, pdump_server);
if (ret && rte_errno != ENOTSUP)
return -1;
@@ -392,14 +424,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
static int
pdump_validate_flags(uint32_t flags)
{
- if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
- flags != RTE_PDUMP_FLAG_RXTX) {
+ if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
PDUMP_LOG(ERR,
"invalid flags, should be either rx/tx/rxtx\n");
rte_errno = EINVAL;
return -1;
}
+ /* mask off the flags we know about */
+ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+ PDUMP_LOG(ERR,
+ "unknown flags: %#x\n", flags);
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
return 0;
}
@@ -426,12 +465,12 @@ pdump_validate_port(uint16_t port, char *name)
}
static int
-pdump_prepare_client_request(char *device, uint16_t queue,
- uint32_t flags,
- uint16_t operation,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ uint16_t operation,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
{
int ret = -1;
struct rte_mp_msg mp_req, *mp_rep;
@@ -440,23 +479,23 @@ pdump_prepare_client_request(char *device, uint16_t queue,
struct pdump_request *req = (struct pdump_request *)mp_req.param;
struct pdump_response *resp;
- req->ver = 1;
- req->flags = flags;
+ memset(req, 0, sizeof(*req));
+ if (flags & RTE_PDUMP_FLAG_PCAPNG)
+ req->ver = PDUMP_CLIENT_PCAPNG;
+ else
+ req->ver = PDUMP_CLIENT_LEGACY;
+
+ req->flags = flags & RTE_PDUMP_FLAG_RXTX;
req->op = operation;
+ req->queue = queue;
+ strlcpy(req->device, device, sizeof(req->device));
+
if ((operation & ENABLE) != 0) {
- strlcpy(req->data.en_v1.device, device,
- sizeof(req->data.en_v1.device));
- req->data.en_v1.queue = queue;
- req->data.en_v1.ring = ring;
- req->data.en_v1.mp = mp;
- req->data.en_v1.filter = filter;
- } else {
- strlcpy(req->data.dis_v1.device, device,
- sizeof(req->data.dis_v1.device));
- req->data.dis_v1.queue = queue;
- req->data.dis_v1.ring = NULL;
- req->data.dis_v1.mp = NULL;
- req->data.dis_v1.filter = NULL;
+ req->queue = queue;
+ req->ring = ring;
+ req->mp = mp;
+ req->filter = filter;
+ req->snaplen = snaplen;
}
strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
@@ -477,11 +516,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
return ret;
}
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+/*
+ * There are two versions of this function because, although the
+ * original API left a place holder for a future filter, it never
+ * checked the value. Therefore the API can't depend on the
+ * application passing a non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring, struct rte_mempool *mp,
+ const struct rte_bpf *filter)
{
int ret;
char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +541,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- ENABLE, ring, mp, filter);
+ if (snaplen == 0)
+ snaplen = UINT32_MAX;
- return ret;
+ return pdump_prepare_client_request(name, queue, flags, snaplen,
+ ENABLE, ring, mp, filter);
}
int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
- uint32_t flags,
- struct rte_ring *ring,
- struct rte_mempool *mp,
- void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
{
- int ret = 0;
+ return pdump_enable(port, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
+{
+ return pdump_enable(port, queue, flags, snaplen,
+ ring, mp, filter);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
+{
+ int ret;
ret = pdump_validate_ring_mp(ring, mp);
if (ret < 0)
@@ -518,10 +585,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- ENABLE, ring, mp, filter);
+ return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+ ENABLE, ring, mp, filter);
+}
- return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+ uint32_t flags,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ void *filter __rte_unused)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+ ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter)
+{
+ return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+ ring, mp, filter);
}
int
@@ -537,8 +624,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(name, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(name, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
@@ -553,8 +640,73 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
if (ret < 0)
return ret;
- ret = pdump_prepare_client_request(device_id, queue, flags,
- DISABLE, NULL, NULL, NULL);
+ ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+ DISABLE, NULL, NULL, NULL);
return ret;
}
+
+static void
+pdump_sum_stats(struct rte_pdump_stats *total,
+ const struct pdump_rxtx_cbs *cbs,
+ uint16_t nq)
+{
+ uint16_t qid;
+
+ memset(total, 0, sizeof(*total));
+
+ for (qid = 0; qid < nq; qid++) {
+ total->received += cbs[qid].stats.received;
+ total->missed += cbs[qid].stats.missed;
+ total->accepted += cbs[qid].stats.accepted;
+ }
+}
+
+int
+rte_pdump_get_stats(uint16_t port, uint16_t queue,
+ struct rte_pdump_stats *rx_stats,
+ struct rte_pdump_stats *tx_stats)
+{
+ uint16_t nb_rx_q = 0, nb_tx_q = 0;
+
+ if (port >= RTE_MAX_ETHPORTS) {
+ PDUMP_LOG(ERR, "Invalid port id %u\n", port);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (queue == RTE_PDUMP_ALL_QUEUES) {
+ struct rte_eth_dev_info dev_info;
+ int ret;
+
+ ret = rte_eth_dev_info_get(port, &dev_info);
+ if (ret != 0) {
+ PDUMP_LOG(ERR,
+ "Error during getting device (port %u) info: %s\n",
+ port, strerror(-ret));
+ return ret;
+ }
+ nb_rx_q = dev_info.nb_rx_queues;
+ nb_tx_q = dev_info.nb_tx_queues;
+ } else if (queue >= RTE_MAX_QUEUES_PER_PORT) {
+ PDUMP_LOG(ERR, "Invalid queue id %u\n", queue);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (rx_stats) {
+ if (queue == RTE_PDUMP_ALL_QUEUES)
+ pdump_sum_stats(rx_stats, &rx_cbs[port][0], nb_rx_q);
+ else
+ *rx_stats = rx_cbs[port][queue].stats;
+ }
+
+ if (tx_stats) {
+ if (queue == RTE_PDUMP_ALL_QUEUES)
+ pdump_sum_stats(tx_stats, &tx_cbs[port][0], nb_tx_q);
+ else
+ *tx_stats = tx_cbs[port][queue].stats;
+ }
+
+ return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..992331fddffb 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_ring.h>
+#include <rte_bpf.h>
#ifdef __cplusplus
extern "C" {
@@ -26,7 +27,9 @@ enum {
RTE_PDUMP_FLAG_RX = 1, /* receive direction */
RTE_PDUMP_FLAG_TX = 2, /* transmit direction */
/* both receive and transmit directions */
- RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+ RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+ RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
};
/**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port
+ * port on which packet capturing should be enabled.
+ * @param queue
+ * queue of a given port on which packet capturing should be enabled.
+ * users should pass the value UINT16_MAX to enable packet capturing on all
+ * queues of a given port.
+ * @param flags
+ * flags specifies RTE_PDUMP_FLAG_RX/RTE_PDUMP_FLAG_TX/RTE_PDUMP_FLAG_RXTX
+ * on which packet capturing should be enabled for a given port and queue.
+ * @param snaplen
+ * snapshot length. No more than snaplen bytes of the network packet
+ * will be saved. Use 0 or 262144 to capture all of the packet.
+ * @param ring
+ * ring on which captured packets will be enqueued for user.
+ * @param mp
+ * mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF filter to run to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter);
+
/**
* Disables packet capturing on given port and queue.
*
@@ -118,7 +157,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
* @param mp
* mempool on to which original packets will be mirrored or duplicated.
* @param filter
- * place holder for packet filtering.
+ * Unused, should be NULL.
*
* @return
* 0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +170,44 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
struct rte_mempool *mp,
void *filter);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ * device id on which packet capturing should be enabled.
+ * @param queue
+ * queue of a given device id on which packet capturing should be enabled.
+ * users should pass on value UINT16_MAX to enable packet capturing on all
+ * queues of a given device id.
+ * @param flags
+ * flags specifies RTE_PDUMP_FLAG_RX/RTE_PDUMP_FLAG_TX/RTE_PDUMP_FLAG_RXTX
+ * on which packet capturing should be enabled for a given port and queue.
+ * @param snaplen
+ * snapshot length. No more than snaplen bytes of the network packet
+ * will be saved. Use 0 or 262144 to capture all of the packet.
+ * @param ring
+ * ring on which captured packets will be enqueued for user.
+ * @param mp
+ * mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ * BPF filter to run to filter packets (can be NULL)
+ *
+ * @return
+ * 0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+ uint32_t flags, uint32_t snaplen,
+ struct rte_ring *ring,
+ struct rte_mempool *mp,
+ const struct rte_bpf *filter);
+
+
/**
* Disables packet capturing on given device_id and queue.
* device_id can be name or pci address of device.
@@ -153,6 +230,40 @@ int
rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
uint32_t flags);
+struct rte_pdump_stats {
+ uint64_t received; /**< callback called */
+ uint64_t accepted; /**< allowed by filter */
+ uint64_t missed; /**< ring full */
+};
+
+/**
+ * Query packet capture statistics.
+ *
+ * @param port
+ * port for which packet capture statistics are queried.
+ * @param queue
+ * queue of a given port for which the statistics are queried.
+ * users should pass on value UINT16_MAX to get aggregate statistics for
+ * all queues of a given port.
+ * @param rx_stats
+ * A pointer to a structure of type *rte_pdump_stats* to be filled with
+ * the values of the Rx capture statistics:
+ * - *received* with the total of received packets.
+ * - *accepted* with the total of packets matched by the filter.
+ * - *missed* with the total of packets missed because of ring full.
+ * @param tx_stats
+ * A pointer to a structure of type *rte_pdump_stats* to be filled with
+ * the values of the Tx capture statistics:
+ * - *received* with the total of transmitted packets.
+ * - *accepted* with the total of packets matched by the filter.
+ * - *missed* with the total of packets missed because of ring full.
+ * @return
+ * Zero if successful. Non-zero otherwise.
+ */
+__rte_experimental
+int
+rte_pdump_get_stats(uint16_t port, uint16_t queue,
+ struct rte_pdump_stats *rx_stats,
+ struct rte_pdump_stats *tx_stats);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..b4c20b56f237 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pdump_enable_bpf;
+ rte_pdump_enable_bpf_by_deviceid;
+ rte_pdump_get_stats;
+};
--
2.30.2
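A minimal usage sketch of the new APIs above (illustrative only: it assumes EAL is initialized and rte_pdump_init() has run in the primary process; the ring/mempool names and sizes are arbitrary example values, not part of the patch):

#include <stdio.h>
#include <inttypes.h>
#include <rte_lcore.h>
#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_pdump.h>

static int
capture_port(uint16_t port_id)
{
	/* Ring and pool parameters are example values; tune for real use. */
	struct rte_ring *ring = rte_ring_create("pdump_ring", 1024,
			rte_socket_id(), 0);
	struct rte_mempool *mp = rte_pktmbuf_pool_create("pdump_pool",
			4095, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
	struct rte_pdump_stats rx_stats, tx_stats;

	if (ring == NULL || mp == NULL)
		return -1;

	/* NULL filter captures everything; snaplen 0 keeps full packets.
	 * UINT16_MAX selects all queues of the port. */
	if (rte_pdump_enable_bpf(port_id, UINT16_MAX, RTE_PDUMP_FLAG_RXTX,
			0, ring, mp, NULL) < 0)
		return -1;

	/* ... dequeue and consume mbufs from 'ring' elsewhere ... */

	if (rte_pdump_get_stats(port_id, UINT16_MAX,
			&rx_stats, &tx_stats) == 0)
		printf("rx: received %" PRIu64 ", accepted %" PRIu64
				", missed %" PRIu64 "\n",
				rx_stats.received, rx_stats.accepted,
				rx_stats.missed);
	return 0;
}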
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v8] doc: add release milestones definition
2021-08-26 10:11 5% ` [dpdk-dev] [PATCH v8] " Thomas Monjalon
@ 2021-09-02 16:33 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-09-02 16:33 UTC (permalink / raw)
To: Thomas Monjalon, dev
Cc: david.marchand, Asaf Penso, John McNamara, Ajit Khaparde,
Bruce Richardson
On 8/26/2021 11:11 AM, Thomas Monjalon wrote:
> From: Asaf Penso <asafp@nvidia.com>
>
> Adding more information about the release milestones.
> This includes the scope of change, expectations, etc.
>
> Signed-off-by: Asaf Penso <asafp@nvidia.com>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: John McNamara <john.mcnamara@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> v2: fix styling format and add content in the commit message
> v3: change punctuation and avoid plural form when unneeded
> v4: avoid abbreviations, "Priority" in -rc, and reword as John suggests
> v5: note that release candidates may vary
> v6: merge RFC and proposal deadline, add roadmap link and reduce duplication
> v7: make expectations clearer and stricter
> v8: add tests, more fixes, maintainers approval and new API rules
> ---
> doc/guides/contributing/patches.rst | 85 +++++++++++++++++++++++++++--
> 1 file changed, 80 insertions(+), 5 deletions(-)
>
> diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
> index b9cc6e67ae..ef784d0f99 100644
> --- a/doc/guides/contributing/patches.rst
> +++ b/doc/guides/contributing/patches.rst
> @@ -164,6 +164,10 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
> the :doc:`ABI policy <abi_policy>` and :ref:`ABI versioning <abi_versioning>`
> guides. New external functions should also be added in alphabetical order.
>
> +* Any new API function should be used in ``/app`` test directory.
> +
> +* When introducing a new device API, at least one driver should implement it.
> +
ack
> * Important changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
> See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
>
> @@ -177,6 +181,8 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
> * Add documentation, if relevant, in the form of Doxygen comments or a User Guide in RST format.
> See the :ref:`Documentation Guidelines <doc_guidelines>`.
>
> +* Code and related documentation must be updated atomically in the same patch.
> +
ack
> Once the changes have been made you should commit them to your local repo.
>
> For small changes, that do not require specific explanations, it is better to keep things together in the
> @@ -185,11 +191,6 @@ Larger changes that require different explanations should be separated into logi
> A good way of thinking about whether a patch should be split is to consider whether the change could be
> applied without dependencies as a backport.
>
> -It is better to keep the related documentation changes in the same patch
> -file as the code, rather than one big documentation patch at the end of a
> -patchset. This makes it easier for future maintenance and development of the
> -code.
> -
> As a guide to how patches should be structured run ``git log`` on similar files.
>
>
> @@ -663,3 +664,77 @@ patch accepted. The general cycle for patch review and acceptance is:
> than rework of the original.
> * Trivial patches may be merged sooner than described above at the tree committer's
> discretion.
> +
> +
> +Milestones definition
> +---------------------
> +
> +Each DPDK release has milestones that help everyone to converge to the release date.
> +The following is a list of these milestones together with
> +concrete definitions and expectations for a typical release cycle.
> +An average cycle lasts 3 months and has 4 release candidates in the last month.
> +Test reports are expected to be received after each release candidate.
> +The number and expectations of release candidates might vary slightly.
> +The schedule is updated in the `roadmap <https://core.dpdk.org/roadmap/#dates>`_.
> +
> +.. note::
> + Sooner is always better. Deadlines are not ideal dates.
> +
> + Integration is never guaranteed but everyone can help.
> +
> +Roadmap
> +~~~~~~~
> +
> +* Announce new features in libraries, drivers, applications, and examples.
> +* To be published before the first day of the release cycle.
> +
> +Proposal Deadline
> +~~~~~~~~~~~~~~~~~
> +
> +* Must send an RFC (Request For Comments) or a complete patch of new features.
> +* Early RFC gives time for design review before complete implementation.
> +* Should include at least the API changes in libraries and applications.
> +* Library code should be quite complete at the deadline.
> +* Nice to have: driver implementation, example code, and documentation.
> +
> +rc1
> +~~~
> +
> +* Priority: libraries. No library feature should be accepted after -rc1.
> +* API changes or additions must be implemented in libraries.
> +* The API must include Doxygen documentation
> + and be part of the relevant .rst files (library-specific and release notes).
> +* API should be used in a test application (``/app``).
> +* At least one PMD should implement the API.
> + It may be a draft sent in a separate series.
> +* The above should be sent to the mailing list at least 2 weeks before -rc1
> + to give time for review and maintainers approval.
> +* If no review after 10 days, a reminder should be sent.
> +* Nice to have: example code (``/examples``)
> +
> +rc2
> +~~~
> +
> +* Priority: drivers. No driver feature should be accepted after -rc2.
> +* A driver change must include documentation
> + in the relevant .rst files (driver-specific and release notes).
> +* The above should be sent to the mailing list at least 2 weeks before -rc2.
Is 'the above' referring to driver changes? It is good to have all driver
changes two weeks before -rc2, but taking into account that the overall time
between -rc1 & -rc2 is 3/4 weeks, two weeks before -rc2 may be hard to do in
practice.
We may go with this and try to evaluate later, or reduce the requirement to one
week before -rc2, what do you think?
> +* Any issue found in -rc1 should be fixed.
> +
> +rc3
> +~~~
> +
> +* Priority: applications. No application feature should be accepted after -rc3.
> +* New functionality that does not depend on libraries update
> + can be integrated as part of -rc3.
> +* The application change must include documentation in the relevant .rst files
> + (application-specific and release notes if significant).
> +* Libraries and drivers cleanup are allowed.
> +* Small driver reworks.
> +* Critical and minor bug fixes.
As mentioned before, my concern is that this may create a false impression that
bugs are fixed only in this phase. What about removing this line completely and
updating the -rc4 item below to 'Critical bug fixes only.'? I think that makes
the intention clearer.
> +
> +rc4
> +~~~
> +
> +* Documentation updates.
> +* Critical bug fixes.
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-02 9:50 3% ` Ferruh Yigit
@ 2021-09-02 16:13 0% ` Kinsella, Ray
2021-09-06 8:51 0% ` Ding, Xuan
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-09-02 16:13 UTC (permalink / raw)
To: Ferruh Yigit, Burakov, Anatoly, Ding, Xuan, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon
On 02/09/2021 10:50, Ferruh Yigit wrote:
> On 9/1/2021 2:25 PM, Burakov, Anatoly wrote:
>> On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
>>> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
>>>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>>>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>>>>> Hi Ferruh,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
>>>>>>> <anatoly.burakov@intel.com>
>>>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
>>>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>>>>
>>>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>>>>> side effect of compacting two separate mappings that just happen to be
>>>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>>>>> DPDK
>>>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>>>>> maps even though it could have been unmapped [1].
>>>>>>>>
>>>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>>>>> also include page size, and always map memory page-by-page.
>>>>>>>>
>>>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>>>>
>>>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>>>>> ---
>>>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>>>> 1 file changed, 3 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>> index 76a4abfd6b..1234420caf 100644
>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
>>>>>>> other
>>>>>>>> information from the crypto/security operation. This field will be
>>>>>>>> used to
>>>>>>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>>>>>>> +
>>>>>>>> +* vfio: the function `rte_vfio_container_dma_map` will be amended to
>>>>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>>>>
>>>>>>>
>>>>>>> Does this mean adding a new parameter to the API?
>>>>>>> If so, this is an ABI/API break and we can't do this change in 22.02.
>>>>>>
>>>>>> Our original plan was to add a new parameter in order not to use a new
>>>>>> function name. So you mean any changes to the API can only be done in the
>>>>>> LTS version?
>>>>>> If so, we can only add a new API and retire the old one in 22.11.
>>>>>>
>>>>>
>>>>> We can add a new API anytime. Adding new parameter to an existing API can be
>>>>> done on the ABI break release.
>>>>
>>>> So, 22.11 then?
>>>>
>>>
>>> Yes.
>>>
>>>>>
>>>>> You can add the new API in this release, and start using it.
>>>>> And mark the old API as deprecated in this release. This lets existing
>>>>> binaries keep using it, but the app needs to switch to the new API for
>>>>> compilation.
>>>>> Old API can be removed on 22.11, and you will need a deprecation notice before
>>>>> 22.11 for it.
>>>>>
>>>>> Does the above plan work for you?
>>>>>
>>>>
>>>> We have slightly rethought our approach, and the functionality that Xuan
>>>> requires does not rely on this API. They can, for all intents and purposes, be
>>>> considered unrelated issues.
>>>>
>>>> I still think it's a good idea to update the API that way, so I would like to do
>>>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
>>>> there no longer is any urgency here, it's acceptable to wait for the next LTS to
>>>> break it.
>>>>
>>>
>>> Got it.
>>>
>>> As far as I understand, main motivation in techboard decision was to prevent the
>>> ABI break as much as possible (main reason of decision wasn't deprecation notice
>>> being late). But if the correct thing to do is to rename the API (and break the
>>> ABI), I don't see the benefit to wait one more year, it is just delaying the
>>> impact and adding overhead to us.
>>> I am for being pragmatic and doing the change in this release if API rename is
>>> better option, perhaps we can visit the issue again in techboard.
>>>
>>> Can you please describe why renaming API is better option, comparing to adding
>>> new API with new parameter?
>>
>> I take it you meant "why renaming API *isn't* a better option".
>>
>> The problem we're solving is that the API in question does not know about page
>> sizes and thus can't map segments page-by-page. I mean I /guess/ we could have
>> two APIs (one paged, one not paged), but then we get into all kinds of hairy
>> things about the API leaking the details of underlying platform.
>>
>> Bottom line: I like the current API function name. It's concise, it's descriptive.
>> It's only missing a parameter, which I would like to add. A rename has been
>> suggested (deprecate old API, add new API with a different name, and with added
>> parameter), but honestly, I don't see why we have to do that because this is
>> predicated upon the assumption that we *can't* break ABI at all, under any
>> circumstances.
>>
>> Can you please explain to me what is wrong with keeping a versioned symbol?
>> Like, keep the old function around to keep ABI compatibility, but break the API
>> compatibility for those who target 22.02 or later? That's what symbol versioning
>> is *for*, is it not?
>>
>
> Nothing wrong with symbol versioning, indeed that is preferred way if it works
> for you, I didn't get that symbol versioning is planned.
>
> @Ray,
> Since symbol versioning is planned, the ABI won't break but the API will
> change; can this change be done in this release without a deprecation notice?
Yes - I would think so.
Since we are going to the effort of using symbol versioning, nothing is being deprecated as such (yet).
> Later we can have a deprecation notice to remove old symbol on 22.11.
>
> Thanks,
> ferruh
>
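For reference, the versioned-symbol pattern Anatoly describes could look roughly like the sketch below. The _v21/_v22 suffixes, the version numbers and the pagesz parameter name are assumptions for illustration, and the real mapping logic is stubbed out:

#include <stdint.h>
#include <rte_function_versioning.h>

/* Old binding: binaries linked against the old ABI keep resolving to this. */
int __vsym
rte_vfio_container_dma_map_v21(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len)
{
	/* previous behaviour: map the whole region as one compacted segment */
	return 0; /* stub */
}
VERSION_SYMBOL(rte_vfio_container_dma_map, _v21, 21);

/* New default binding, carrying the extra page-size parameter. */
int __vsym
rte_vfio_container_dma_map_v22(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len, uint64_t pagesz)
{
	/* new behaviour: map page-by-page so any page can be unmapped later */
	return 0; /* stub */
}
BIND_DEFAULT_SYMBOL(rte_vfio_container_dma_map, _v22, 22);

The matching DPDK_21/DPDK_22 nodes would also be needed in the library's version.map, and this only takes effect for shared-library builds with function versioning enabled.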
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 0/5] eal: enable global device syntax by default
@ 2021-09-02 14:46 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-09-02 14:46 UTC (permalink / raw)
To: Xueming Li; +Cc: Gaetan Rivet, dev, Asaf Penso, mdr, david.marchand
14/04/2021 21:49, Thomas Monjalon:
> 13/04/2021 05:14, Xueming Li:
> > Xueming Li (5):
> > devargs: unify scratch buffer storage
> > devargs: fix memory leak on parsing error
> > kvargs: add get by key function
> > bus: add device arguments name parsing API
> > devargs: parse global device syntax
>
> The patch 4 adds a new callback in rte_bus.
> I thought about it during the whole day and I don't see any good way
> to merge it without breaking the ABI compatibility.
>
> Only first 3 patches are applied for now, thanks.
Last 2 patches are applied for 21.11, thanks.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-01 13:25 4% ` Burakov, Anatoly
@ 2021-09-02 9:50 3% ` Ferruh Yigit
2021-09-02 16:13 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-02 9:50 UTC (permalink / raw)
To: Burakov, Anatoly, Ding, Xuan, dev, Ray Kinsella
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon
On 9/1/2021 2:25 PM, Burakov, Anatoly wrote:
> On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
>> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
>>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>>>> Hi Ferruh,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
>>>>>> <anatoly.burakov@intel.com>
>>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
>>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>>>
>>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>>>> side effect of compacting two separate mappings that just happen to be
>>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>>>> DPDK
>>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>>>> maps even though it could have been unmapped [1].
>>>>>>>
>>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>>>> also include page size, and always map memory page-by-page.
>>>>>>>
>>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>>>
>>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>>>> ---
>>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>>> 1 file changed, 3 insertions(+)
>>>>>>>
>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>> index 76a4abfd6b..1234420caf 100644
>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
>>>>>> other
>>>>>>> information from the crypto/security operation. This field will be
>>>>>>> used to
>>>>>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>>>>>> +
>>>>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended to
>>>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>>>
>>>>>>
>>>>>> Is this means adding a new parameter to API?
>>>>>> If so this is an ABI/API break and we can't do this change in the 22.02.
>>>>>
>>>>> Our original plan is add a new parameter in order not to use a new function
>>>>> name, so you mean, any changes to the API can only be done in the LTS version?
>>>>> If so, we can only add a new API and retire the old one in 22.11.
>>>>>
>>>>
>>>> We can add a new API anytime. Adding new parameter to an existing API can be
>>>> done on the ABI break release.
>>>
>>> So, 22.11 then?
>>>
>>
>> Yes.
>>
>>>>
>>>> You can add the new API in this release, and start using it.
>>>> And mark the old API as deprecated in this release. This lets existing binaries
>>>> to keep using it, but app needs to switch to new API for compilation.
>>>> Old API can be removed on 22.11, and you will need a deprecation notice before
>>>> 22.11 for it.
>>>>
>>>> Is above plan works for you?
>>>>
>>>
>>> We have slightly rethought our approach, and the functionality that Xuan
>>> requires does not rely on this API. They can, for all intents and purposes, be
>>> considered unrelated issues.
>>>
>>> I still think it's a good idea to update the API that way, so I would like to do
>>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
>>> there no longer is any urgency here, it's acceptable to wait for the next LTS to
>>> break it.
>>>
>>
>> Got it.
>>
>> As far as I understand, main motivation in techboard decision was to prevent the
>> ABI break as much as possible (main reason of decision wasn't deprecation notice
>> being late). But if the correct thing to do is to rename the API (and break the
>> ABI), I don't see the benefit to wait one more year, it is just delaying the
>> impact and adding overhead to us.
>> I am for being pragmatic and doing the change in this release if API rename is
>> better option, perhaps we can visit the issue again in techboard.
>>
>> Can you please describe why renaming API is better option, comparing to adding
>> new API with new parameter?
>
> I take it you meant "why renaming API *isn't* a better option".
>
> The problem we're solving is that the API in question does not know about page
> sizes and thus can't map segments page-by-page. I mean I /guess/ we could have
> two API's (one paged, one not paged), but then we get into all kinds of hairy
> things about the API leaking the details of underlying platform.
>
> Bottom line: i like current API function name. It's concise, it's descriptive.
> It's only missing a parameter, which i would like to add. A rename has been
> suggested (deprecate old API, add new API with a different name, and with added
> parameter), but honestly, I don't see why we have to do that because this is
> predicated upon the assumption that we *can't* break ABI at all, under any
> circumstances.
>
> Can you please explain to me what is wrong with keeping a versioned symbol?
> Like, keep the old function around to keep ABI compatibility, but break the API
> compatibility for those who target 22.02 or later? That's what symbol versioning
> is *for*, is it not?
>
Nothing wrong with symbol versioning, indeed that is preferred way if it works
for you, I didn't get that symbol versioning is planned.
@Ray,
Since symbol versioning is planned, the ABI won't break but the API will
change; can this change be done in this release without a deprecation notice?
Later we can have a deprecation notice to remove old symbol on 22.11.
Thanks,
ferruh
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check
2021-09-01 13:36 3% ` Tom Rix
@ 2021-09-01 15:00 3% ` Chautru, Nicolas
2021-09-11 19:11 0% ` Tom Rix
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-09-01 15:00 UTC (permalink / raw)
To: Tom Rix, dev, gakhil; +Cc: thomas, hemant.agrawal, Zhang, Mingshan, Joshi, Arun
> -----Original Message-----
> From: Tom Rix <trix@redhat.com>
> Sent: Wednesday, September 1, 2021 6:37 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> gakhil@marvell.com
> Cc: thomas@monjalon.net; hemant.agrawal@nxp.com; Zhang, Mingshan
> <mingshan.zhang@intel.com>; Joshi, Arun <arun.joshi@intel.com>
> Subject: Re: [PATCH v2 1/6] bbdev: add capability for CRC16 check
>
>
> On 8/19/21 2:10 PM, Nicolas Chautru wrote:
> > Adding a missing operation when CRC16
> > is being used for TB CRC check.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> > app/test-bbdev/test_bbdev_vector.c | 2 ++
> > doc/guides/prog_guide/bbdev.rst | 3 +++
> > doc/guides/rel_notes/release_21_11.rst | 1 +
> > lib/bbdev/rte_bbdev_op.h | 34 ++++++++++++++++++--------------
> --
> > 4 files changed, 24 insertions(+), 16 deletions(-)
> >
> > diff --git a/app/test-bbdev/test_bbdev_vector.c
> > b/app/test-bbdev/test_bbdev_vector.c
> > index 614dbd1..8d796b1 100644
> > --- a/app/test-bbdev/test_bbdev_vector.c
> > +++ b/app/test-bbdev/test_bbdev_vector.c
> > @@ -167,6 +167,8 @@
> > *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
> > else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
> > *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
> > + else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
> > + *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
> > else if (!strcmp(token,
> "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
> > *op_flag_value =
> RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
> > else if (!strcmp(token,
> "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
> > diff --git a/doc/guides/prog_guide/bbdev.rst
> > b/doc/guides/prog_guide/bbdev.rst index 9619280..8bd7cba 100644
> > --- a/doc/guides/prog_guide/bbdev.rst
> > +++ b/doc/guides/prog_guide/bbdev.rst
> > @@ -891,6 +891,9 @@ given below.
> > |RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
> > | Set to drop the last CRC bits decoding output |
> >
> > +--------------------------------------------------------------------+
> > +|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
> > +| Set for code block CRC-16 checking |
> > ++--------------------------------------------------------------------+
> > |RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
> > | Set for bit-level de-interleaver bypass on input stream |
> >
> > +--------------------------------------------------------------------+
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index d707a55..69dd518 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -84,6 +84,7 @@ API Changes
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > +* bbdev: Added capability related to more comprehensive CRC options.
> >
> > ABI Changes
> > -----------
> > diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h index
> > f946842..7c44ddd 100644
> > --- a/lib/bbdev/rte_bbdev_op.h
> > +++ b/lib/bbdev/rte_bbdev_op.h
> > @@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
> > RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
> > /** Set to drop the last CRC bits decoding output */
> > RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
> > + /** Set for transport block CRC-16 checking */
> > + RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
>
> Changing these enums will break the ABI backwards.
>
> Why not add the new one at the end?
To keep all the CRC-related flags next to each other for better readability and logical clarity. The ABI is still marked as experimental.
>
> Tom
>
> > /** Set for bit-level de-interleaver bypass on Rx stream. */
> > - RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
> > + RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
> > /** Set for HARQ combined input stream enable. */
> > - RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
> > + RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
> > /** Set for HARQ combined output stream enable. */
> > - RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
> > + RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
> > /** Set for LDPC decoder bypass.
> > * RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
> > */
> > - RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
> > + RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
> > /** Set for soft-output stream enable */
> > - RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
> > + RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
> > /** Set for Rate-Matching bypass on soft-out stream. */
> > - RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
> > + RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
> > /** Set for bit-level de-interleaver bypass on soft-output stream. */
> > - RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL <<
> 9),
> > + RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL <<
> 10),
> > /** Set for iteration stopping on successful decode condition
> > * i.e. a successful syndrome check.
> > */
> > - RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
> > + RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
> > /** Set if a device supports decoder dequeue interrupts. */
> > - RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
> > + RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
> > /** Set if a device supports scatter-gather functionality. */
> > - RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
> > + RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
> > /** Set if a device supports input/output HARQ compression. */
> > - RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
> > + RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
> > /** Set if a device supports input LLR compression. */
> > - RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
> > + RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
> > /** Set if a device supports HARQ input from
> > * device's internal memory.
> > */
> > - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL
> << 15),
> > + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL
> << 16),
> > /** Set if a device supports HARQ output to
> > * device's internal memory.
> > */
> > - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE =
> (1ULL << 16),
> > + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE =
> (1ULL << 17),
> > /** Set if a device supports loop-back access to
> > * HARQ internal memory. Intended for troubleshooting.
> > */
> > - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL
> << 17),
> > + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL
> << 18),
> > /** Set if a device includes LLR filler bits in the circular buffer
> > * for HARQ memory. If not set, it is assumed the filler bits are not
> > * in HARQ memory and handled directly by the LDPC decoder.
> > */
> > - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL <<
> 18)
> > + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL <<
> 19)
> > };
> >
> > /** Flags for LDPC encoder operation and capability structure */
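To make the hazard Tom raised concrete, here is a hypothetical application fragment built against the pre-patch headers (illustration only, not part of the patch):

#include <stdint.h>
#include <rte_bbdev_op.h>

/* Compiled against the old headers, the constant below is baked into the
 * binary as (1ULL << 3). */
static uint32_t
old_app_flags(void)
{
	return RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
}

/* Run against the renumbered library, bit 3 is decoded as
 * RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK, silently enabling CRC-16 checking
 * instead of the de-interleaver bypass. Appending new flags after the last
 * existing value avoids this; here the renumbering was accepted only
 * because the bbdev API is still experimental. */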
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4] ethdev: fix representor port ID search by name
2021-08-31 16:06 3% ` [dpdk-dev] [PATCH v4] " Andrew Rybchenko
2021-08-31 16:32 0% ` Wang, Haiyue
2021-09-01 5:15 0% ` Xing, Beilei
@ 2021-09-01 14:55 0% ` Ferruh Yigit
2 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-09-01 14:55 UTC (permalink / raw)
To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
Thomas Monjalon
Cc: dev, Viacheslav Galaktionov, declan.doherty
On 8/31/2021 5:06 PM, Andrew Rybchenko wrote:
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
>
Which code is getting the list of representors?
As far as I can see, the impacted APIs are:
'rte_eth_representor_id_get()'
'rte_eth_representor_info_get()'
These now get 'parent_port_id' as an argument, instead of the representor port id.
'rte_eth_representor_info_get()' is using the 'representor_info_get()' dev_ops,
which is only implemented by 'mlx5', so is this problem only valid for 'mlx5'
and can it be solved within the PMD implementation?
> To this end, extend the rte_eth_dev_data structure to include the port ID
> of the backing device for representors.
>
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI
> check is disabled for the structure because of the libabigail bug [1].
>
> Potentially it is bad for out-of-tree drivers which implement
> representors but do not fill in a new parent_port_id field in
> rte_eth_dev_data structure. Do we care?
>
> mlx5 changes should be reviewed by maintainers very carefully, since
> we are not sure if we patch it correctly.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>
> v4:
> - apply mlx5 review notes: remove fallback from generic ethdev
> code and add fallback to mlx5 code to handle legacy usecase
>
> v3:
> - fix mlx5 build breakage
>
> v2:
> - fix mlx5 review notes
> - try device port ID first before parent in order to address
> backward compatibility issue
>
<...>
> index edf96de2dc..72fefa59c2 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,12 @@ struct rte_eth_dev_data {
> /**< Switch-specific identifier.
> * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> */
> + uint16_t parent_port_id;
Why 'parent'? Isn't this for the port that the port representor represents?
Is that called 'parent'? Above, that device is mentioned as 'backing device' a
few times, so would something like 'peer_port_id' be better?
> + /**< Port ID of the backing device.
> + * This device will be used to query representor
> + * info and calculate representor IDs.
> + * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> + */
>
Overall I am not feeling good about adding representor port related information
within the device data struct.
I wonder if this information can be kept in the device private data.
Or, it is hard to explain, but can we use something like inheritance: a
representor-specific dev_data derived from the original dev_data? We can store
dev_data pointers in 'struct rte_eth_dev' but cast them to the
representor-specific dev_data when the type is representor.
struct rte_eth_dev_data_rep
struct rte_eth_dev_data
<representor specific fields>
This brings lots of complexity though, especially in allocating/freeing this
struct; not sure if it is worth the effort.
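A rough sketch of that inheritance idea (struct and field names are hypothetical, and it assumes the representor's dev_data is allocated as the larger struct):

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical derived data: the base struct must be the first member so a
 * plain cast from 'struct rte_eth_dev_data *' stays valid. */
struct rte_eth_dev_data_rep {
	struct rte_eth_dev_data data;   /* common ethdev fields */
	uint16_t parent_port_id;        /* representor-specific field */
};

static uint16_t
rep_parent_port_id(struct rte_eth_dev *dev)
{
	struct rte_eth_dev_data_rep *rep;

	if (!(dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR))
		return RTE_MAX_ETHPORTS; /* not a representor */

	rep = (struct rte_eth_dev_data_rep *)dev->data;
	return rep->parent_port_id;
}

Allocation and the shared-memory lookup of dev_data would then have to know which size to use, which is part of the complexity mentioned above.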
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] bbdev: remove experimental tag from API
@ 2021-09-01 14:54 3% ` Chautru, Nicolas
0 siblings, 0 replies; 200+ results
From: Chautru, Nicolas @ 2021-09-01 14:54 UTC (permalink / raw)
To: Kinsella, Ray, Hemant Agrawal, David Marchand
Cc: dev, Akhil Goyal, Thomas Monjalon, Tom Rix, Zhang, Mingshan,
Joshi, Arun, Maxime Coquelin
> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Wednesday, September 1, 2021 4:15 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; David Marchand
> <david.marchand@redhat.com>; Chautru, Nicolas
> <nicolas.chautru@intel.com>
> Cc: dev <dev@dpdk.org>; Akhil Goyal <gakhil@marvell.com>; Thomas
> Monjalon <thomas@monjalon.net>; Tom Rix <trix@redhat.com>; Zhang,
> Mingshan <mingshan.zhang@intel.com>; Joshi, Arun
> <arun.joshi@intel.com>; Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: Re: [PATCH v1] bbdev: remove experimental tag from API
>
>
>
> On 01/09/2021 04:52, Hemant Agrawal wrote:
> >>
> >> On Tue, Aug 31, 2021 at 6:27 PM Nicolas Chautru
> >> <nicolas.chautru@intel.com> wrote:
> >>>
> >>> This was previously suggested last year
> >>> https://patches.dpdk.org/project/dpdk/patch/1593213242-157394-2-git-send-email-nicolas.chautru@intel.com/
> >>> but there was a request from the community to wait another year to confirm
> >>> formally that this API is mature.
> >>
> >> The request was to wait for a new vendor (i.e. non Intel) to
> >> implement a bbdev driver for their hw.
> >>
> >> NXP proposed a driver for this class:
> >> https://patchwork.dpdk.org/project/dpdk/list/?series=16642&state=%2A&archive=both
> >>
> >> What is the status of this driver?
> >
> > [Hemant] We are in process of resending the patches in next few days.
>
> Ok - just so we are clear - the net-net of that, is that the bbdev APIs will be
> 'experimental'
> for another year, right?
There is little reason to defer further, I believe. Hemant, please confirm you are not looking at any ABI change based on the discussion for the last few months.
>
> >
> >> I see no update on the bbdev API, which seems good, but a
> >> confirmation is needed.
Indeed, that's all we need. Confirmation that no change is required for the new PMD.
> >>
> >>
> >> --
> >> David Marchand
> >
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote some API to stable
2021-09-01 11:13 3% ` Wang, Haiyue
@ 2021-09-01 13:55 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-01 13:55 UTC (permalink / raw)
To: Wang, Haiyue, Yigit, Ferruh, dev; +Cc: thomas, David Marchand
On 01/09/2021 12:13, Wang, Haiyue wrote:
>> -----Original Message-----
>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Sent: Wednesday, September 1, 2021 17:02
>> To: Wang, Haiyue <haiyue.wang@intel.com>; dev@dpdk.org
>> Cc: mdr@ashroe.eu; thomas@monjalon.net
>> Subject: Re: [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote some API to stable
>>
>> On 9/1/2021 6:07 AM, Haiyue Wang wrote:
>>> The DPDK Symbol Bot reports:
>>> Please note the symbols listed below have expired. In line with the
>>> DPDK ABI policy, they should be scheduled for removal, in the next
>>> DPDK release.
>>>
>>> Symbol
>>> rte_pmd_ixgbe_mdio_lock
>>> rte_pmd_ixgbe_mdio_unlock
>>> rte_pmd_ixgbe_mdio_unlocked_read
>>> rte_pmd_ixgbe_mdio_unlocked_write
>>> rte_pmd_ixgbe_upd_fctrl_sbp
>>
>> I wonder if we should keep PMD specific APIs as experimental (Not talking about
>> mbuf 'dynfield' / 'dynflag' APIs, we can promote them).
>
> Yes, makes sense.
>
>>
>> If an application is using a PMD-specific API, I am not sure it will be
>> concerned about the stability of PMD-specific APIs.
>> And keeping PMD-specific APIs experimental lets us remove them as soon as we
>> can; it also further discourages users from relying on them.
>
> Can update this to DPDK ABI Policy, section 3.5.3.
> https://doc.dpdk.org/guides/contributing/abi_policy.html
I understand and agree.
However, we never made any exceptions for PMD-specific APIs in the policy.
Leave them as experimental for the moment.
I will add a clause to the policy ....
Thomas / David - any opinion?
>
>>
>>>
>>> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
>>
>>
>> <...>
>>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode API to stable
2021-09-01 9:07 0% ` Ferruh Yigit
@ 2021-09-01 13:49 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-09-01 13:49 UTC (permalink / raw)
To: Ferruh Yigit, Haiyue Wang, dev; +Cc: thomas, Andrew Rybchenko
On 01/09/2021 10:07, Ferruh Yigit wrote:
> On 9/1/2021 6:07 AM, Haiyue Wang wrote:
>> The DPDK Symbol Bot reports:
>> Please note the symbols listed below have expired. In line with the
>> DPDK ABI policy, they should be scheduled for removal, in the next
>> DPDK release.
>>
>> Symbol
>> rte_eth_rx_burst_mode_get
>> rte_eth_tx_burst_mode_get
>>
>> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check
2021-08-19 21:10 4% ` [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
@ 2021-09-01 13:36 3% ` Tom Rix
2021-09-01 15:00 3% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Tom Rix @ 2021-09-01 13:36 UTC (permalink / raw)
To: Nicolas Chautru, dev, gakhil
Cc: thomas, hemant.agrawal, mingshan.zhang, arun.joshi
On 8/19/21 2:10 PM, Nicolas Chautru wrote:
> Adding a missing operation when CRC16
> is being used for TB CRC check.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
> app/test-bbdev/test_bbdev_vector.c | 2 ++
> doc/guides/prog_guide/bbdev.rst | 3 +++
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/bbdev/rte_bbdev_op.h | 34 ++++++++++++++++++----------------
> 4 files changed, 24 insertions(+), 16 deletions(-)
>
> diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
> index 614dbd1..8d796b1 100644
> --- a/app/test-bbdev/test_bbdev_vector.c
> +++ b/app/test-bbdev/test_bbdev_vector.c
> @@ -167,6 +167,8 @@
> *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
> else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
> *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
> + else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
> + *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
> else if (!strcmp(token, "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
> *op_flag_value = RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
> else if (!strcmp(token, "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
> diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
> index 9619280..8bd7cba 100644
> --- a/doc/guides/prog_guide/bbdev.rst
> +++ b/doc/guides/prog_guide/bbdev.rst
> @@ -891,6 +891,9 @@ given below.
> |RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
> | Set to drop the last CRC bits decoding output |
> +--------------------------------------------------------------------+
> +|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
> +| Set for code block CRC-16 checking |
> ++--------------------------------------------------------------------+
> |RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
> | Set for bit-level de-interleaver bypass on input stream |
> +--------------------------------------------------------------------+
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index d707a55..69dd518 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -84,6 +84,7 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* bbdev: Added capability related to more comprehensive CRC options.
>
> ABI Changes
> -----------
> diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
> index f946842..7c44ddd 100644
> --- a/lib/bbdev/rte_bbdev_op.h
> +++ b/lib/bbdev/rte_bbdev_op.h
> @@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
> RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
> /** Set to drop the last CRC bits decoding output */
> RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
> + /** Set for transport block CRC-16 checking */
> + RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
Changing these enums will break the ABI backwards.
Why not add the new one at the end?
Tom
> /** Set for bit-level de-interleaver bypass on Rx stream. */
> - RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
> + RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
> /** Set for HARQ combined input stream enable. */
> - RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
> + RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
> /** Set for HARQ combined output stream enable. */
> - RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
> + RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
> /** Set for LDPC decoder bypass.
> * RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
> */
> - RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
> + RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
> /** Set for soft-output stream enable */
> - RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
> + RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
> /** Set for Rate-Matching bypass on soft-out stream. */
> - RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
> + RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
> /** Set for bit-level de-interleaver bypass on soft-output stream. */
> - RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 9),
> + RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 10),
> /** Set for iteration stopping on successful decode condition
> * i.e. a successful syndrome check.
> */
> - RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
> + RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
> /** Set if a device supports decoder dequeue interrupts. */
> - RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
> + RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
> /** Set if a device supports scatter-gather functionality. */
> - RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
> + RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
> /** Set if a device supports input/output HARQ compression. */
> - RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
> + RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
> /** Set if a device supports input LLR compression. */
> - RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
> + RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
> /** Set if a device supports HARQ input from
> * device's internal memory.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 15),
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 16),
> /** Set if a device supports HARQ output to
> * device's internal memory.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 16),
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 17),
> /** Set if a device supports loop-back access to
> * HARQ internal memory. Intended for troubleshooting.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 17),
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 18),
> /** Set if a device includes LLR filler bits in the circular buffer
> * for HARQ memory. If not set, it is assumed the filler bits are not
> * in HARQ memory and handled directly by the LDPC decoder.
> */
> - RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 18)
> + RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 19)
> };
>
> /** Flags for LDPC encoder operation and capability structure */
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-01 11:42 4% ` Ferruh Yigit
@ 2021-09-01 13:25 4% ` Burakov, Anatoly
2021-09-02 9:50 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-09-01 13:25 UTC (permalink / raw)
To: Ferruh Yigit, Ding, Xuan, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon, Ray Kinsella
On 01-Sep-21 12:42 PM, Ferruh Yigit wrote:
> On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
>> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>>> Hi Ferruh,
>>>>
>>>>> -----Original Message-----
>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
>>>>> <anatoly.burakov@intel.com>
>>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
>>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>>
>>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>>> side effect of compacting two separate mappings that just happen to be
>>>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>>> DPDK
>>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>>> maps even though it could have been unmapped [1].
>>>>>>
>>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>>> also include page size, and always map memory page-by-page.
>>>>>>
>>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>>
>>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>>> ---
>>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>> 1 file changed, 3 insertions(+)
>>>>>>
>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>> index 76a4abfd6b..1234420caf 100644
>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
>>>>> other
>>>>>> information from the crypto/security operation. This field will be used to
>>>>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>>>>> +
>>>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended to
>>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>>
>>>>>
>>>>> Is this means adding a new parameter to API?
>>>>> If so this is an ABI/API break and we can't do this change in the 22.02.
>>>>
>>>> Our original plan is add a new parameter in order not to use a new function
>>>> name, so you mean, any changes to the API can only be done in the LTS version?
>>>> If so, we can only add a new API and retire the old one in 22.11.
>>>>
>>>
>>> We can add a new API anytime. Adding new parameter to an existing API can be
>>> done on the ABI break release.
>>
>> So, 22.11 then?
>>
>
> Yes.
>
>>>
>>> You can add the new API in this release, and start using it.
>>> And mark the old API as deprecated in this release. This lets existing binaries
>>> to keep using it, but app needs to switch to new API for compilation.
>>> Old API can be removed on 22.11, and you will need a deprecation notice before
>>> 22.11 for it.
>>>
>>> Is above plan works for you?
>>>
>>
>> We have slightly rethought our approach, and the functionality that Xuan
>> requires does not rely on this API. They can, for all intents and purposes, be
>> considered unrelated issues.
>>
>> I still think it's a good idea to update the API that way, so I would like to do
>> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
>> there no longer is any urgency here, it's acceptable to wait for the next LTS to
>> break it.
>>
>
> Got it.
>
> As far as I understand, main motivation in techboard decision was to prevent the
> ABI break as much as possible (main reason of decision wasn't deprecation notice
> being late). But if the correct thing to do is to rename the API (and break the
> ABI), I don't see the benefit to wait one more year, it is just delaying the
> impact and adding overhead to us.
> I am for being pragmatic and doing the change in this release if API rename is
> better option, perhaps we can visit the issue again in techboard.
>
> Can you please describe why renaming API is better option, comparing to adding
> new API with new parameter?
I take it you meant "why renaming API *isn't* a better option".
The problem we're solving is that the API in question does not know
about page sizes and thus can't map segments page-by-page. I mean I
/guess/ we could have two API's (one paged, one not paged), but then we
get into all kinds of hairy things about the API leaking the details of
underlying platform.
Bottom line: i like current API function name. It's concise, it's
descriptive. It's only missing a parameter, which i would like to add. A
rename has been suggested (deprecate old API, add new API with a
different name, and with added parameter), but honestly, I don't see why
we have to do that because this is predicated upon the assumption that
we *can't* break ABI at all, under any circumstances.
Can you please explain to me what is wrong with keeping a versioned
symbol? Like, keep the old function around to keep ABI compatibility,
but break the API compatibility for those who target 22.02 or later?
That's what symbol versioning is *for*, is it not?
--
Thanks,
Anatoly
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-01 12:46 0% ` Aaron Conole
@ 2021-09-01 13:01 5% ` David Marchand
2021-09-03 13:28 0% ` Kinsella, Ray
1 sibling, 1 reply; 200+ results
From: David Marchand @ 2021-09-01 13:01 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, Bruce Richardson, Stephen Hemminger, Yigit, Ferruh,
Thomas Monjalon, Kevin Traynor, Aaron Conole
Hello Ray,
On Tue, Aug 31, 2021 at 4:51 PM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Use this script with the output of the DPDK symbol tool, to notify
> maintainers of expired symbols by email. You need to define the environment
> variable DPDK_GETMAINTAINER_PATH for this tool to work.
>
> Use terminal output to review the emails before sending.
Two comments:
- there are references of a previous name for the script,
%s/notify_expired_symbols.py/notify-symbol-maintainers.py/g
- and a reminder for the empty report that we received yesterday.
I think this can be reproduced with:
$ DPDK_GETMAINTAINER_PATH=devtools/get_maintainer.pl
devtools/notify-symbol-maintainers.py --format-output terminal <<EOF
> mapfile,expired (v21.08,v19.11),contributor name,contributor email
> lib/rib,rte_rib6_get_ip,Stephen Hemminger,stephen@networkplumber.org
> EOF
To:Ray Kinsella <mdr@ashroe.eu>, Thomas Monjalon <thomas@monjalon.net>
Reply-To:no-reply@dpdk.org
Subject:Expired symbols in
Body:Hi there,
Please note the symbols listed below have expired. In line with the DPDK ABI
policy, they should be scheduled for removal, in the next DPDK release.
For more information, please see the DPDK ABI Policy, section 3.5.3.
https://doc.dpdk.org/guides/contributing/abi_policy.html
Thanks,
The DPDK Symbol Bot
Symbol Contributor
Email
--------------------------------------------------------------------------------
^^^^
Here, empty report.
To:Vladimir Medvedkin <vladimir.medvedkin@intel.com>, stephen@networkplumber.org
Reply-To:no-reply@dpdk.org
CC:Ray Kinsella <mdr@ashroe.eu>, Thomas Monjalon <thomas@monjalon.net>
Subject:Expired symbols in lib/rib
Body:Hi there,
Please note the symbols listed below have expired. In line with the DPDK ABI
policy, they should be scheduled for removal, in the next DPDK release.
For more information, please see the DPDK ABI Policy, section 3.5.3.
https://doc.dpdk.org/guides/contributing/abi_policy.html
Thanks,
The DPDK Symbol Bot
Symbol Contributor
Email
rte_rib6_get_ip Stephen Hemminger
stephen@networkplumber.org
--------------------------------------------------------------------------------
--
David Marchand
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
@ 2021-09-01 12:46 0% ` Aaron Conole
2021-09-03 11:15 0% ` Kinsella, Ray
2021-09-03 13:32 0% ` Kinsella, Ray
2021-09-01 13:01 5% ` David Marchand
1 sibling, 2 replies; 200+ results
From: Aaron Conole @ 2021-09-01 12:46 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor
Ray Kinsella <mdr@ashroe.eu> writes:
> Use this script with the output of the DPDK symbol tool, to notify
> maintainers of expired symbols by email. You need to define the environment
> variable DPDK_GETMAINTAINER_PATH for this tool to work.
>
> Use terminal output to review the emails before sending.
> e.g.
> $ devtools/symbol-tool.py list-expired --format-output csv \
> | DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
> devtools/notify_expired_symbols.py --format-output terminal
>
> Then use email output to send the emails to the maintainers.
> e.g.
> $ devtools/symbol-tool.py list-expired --format-output csv \
> | DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
> devtools/notify_expired_symbols.py --format-output email \
> --smtp-server <server> --sender <someone@somewhere.com> \
> --password <password> --cc <someone@somewhere.com>
>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> devtools/notify-symbol-maintainers.py | 256 ++++++++++++++++++++++++++
> 1 file changed, 256 insertions(+)
> create mode 100755 devtools/notify-symbol-maintainers.py
>
> diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
> new file mode 100755
> index 0000000000..ee554687ff
> --- /dev/null
> +++ b/devtools/notify-symbol-maintainers.py
> @@ -0,0 +1,256 @@
> +#!/usr/bin/env python3
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2021 Intel Corporation
> +# pylint: disable=invalid-name
> +'''Tool to notify maintainers of expired symbols'''
> +import smtplib
> +import ssl
> +import sys
> +import subprocess
> +import argparse
> +from argparse import RawTextHelpFormatter
> +import time
> +from email.message import EmailMessage
> +
> +DESCRIPTION = '''
> +Use this script with the output of the DPDK symbol tool, to notify maintainers
> +and contributors of expired symbols by email. You need to define the environment
> +variable DPDK_GETMAINTAINER_PATH for this tool to work.
> +
> +Use terminal output to review the emails before sending.
> +e.g.
> +$ devtools/symbol-tool.py list-expired --format-output csv \\
> +| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
> +{s} --format-output terminal
> +
> +Then use email output to send the emails to the maintainers.
> +e.g.
> +$ devtools/symbol-tool.py list-expired --format-output csv \\
> +| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
> +{s} --format-output email \\
> +--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
> +--cc <someone@somewhere.com>
> +'''
> +
> +EMAIL_TEMPLATE = '''Hi there,
> +
> +Please note the symbols listed below have expired. In line with the DPDK ABI
> +policy, they should be scheduled for removal, in the next DPDK release.
> +
> +For more information, please see the DPDK ABI Policy, section 3.5.3.
> +https://doc.dpdk.org/guides/contributing/abi_policy.html
> +
> +Thanks,
> +
> +The DPDK Symbol Bot
> +
> +'''
> +
> +ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
> +MAINTAINERS = 'MAINTAINERS'
> +get_maintainer = ['devtools/get-maintainer.sh', \
> + '--email', '-f']
Maybe it's best to make this something that can be overridden. There's
a series to change the .sh files to .py files. Perhaps an environment
variable or argument?
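For instance, a minimal sketch using an environment variable (the variable
name DPDK_GETMAINTAINER_SCRIPT is an assumption):

    import os

    # allow the get-maintainer command to be overridden, falling back to
    # the in-tree shell script when the variable is unset
    get_maintainer = [os.environ.get('DPDK_GETMAINTAINER_SCRIPT',
                                     'devtools/get-maintainer.sh'),
                      '--email', '-f']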
> +def _get_maintainers(libpath):
> + '''Get the maintainers for given library'''
> + try:
> + cmd = get_maintainer + [libpath]
> + result = subprocess.run(cmd, \
> + stdout=subprocess.PIPE, \
> + stderr=subprocess.PIPE,
> + check=True)
> + except subprocess.CalledProcessError:
> + return None
You might consider handling
except FileNotFoundError:
    ...
with a graceful exit and error message, in case the get_maintainers
path changes.
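Something along these lines, as a sketch:

    try:
        cmd = get_maintainer + [libpath]
        result = subprocess.run(cmd,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                check=True)
    except subprocess.CalledProcessError:
        return None
    except FileNotFoundError:
        # fail gracefully instead of raising a bare traceback
        sys.exit('cannot find {}, check your DPDK tree'.format(cmd[0]))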
> + if result is None:
> + return None
> +
> + email = result.stdout.decode('utf-8')
> + if email == '':
> + return None
> +
> + email = list(filter(None,email.split('\n')))
> + return email
> +
> +default_maintainers = _get_maintainers(ABI_POLICY) + \
> + _get_maintainers(MAINTAINERS)
> +
> +def get_maintainers(libpath):
> + '''Get the maintainers for given library'''
> + maintainers=_get_maintainers(libpath)
> +
> + if maintainers is None:
> + maintainers = default_maintainers
> +
> + return maintainers
> +
> +def get_message(library, symbols, config):
> + '''Build email message from symbols, config and maintainers'''
> + contributors = {}
> + message = {}
> + maintainers = get_maintainers(library)
> +
> + if maintainers != default_maintainers:
> + message['CC'] = default_maintainers.copy()
> +
> + if 'CC' in config:
> + message.setdefault('CC',[]).append(config['CC'])
> +
> + message['Subject'] = 'Expired symbols in {}\n'.format(library)
> +
> + body = EMAIL_TEMPLATE
> + body += '{:<50}{:<25}{:<25}\n'.format('Symbol','Contributor','Email')
> + for sym in symbols:
> + body += ('{:<50}{:<25}{:<25}\n'.format(sym,\
> + symbols[sym]['name'],
> + symbols[sym]['email'],
> + ))
> + email = symbols[sym]['email']
> + contributors[email] = ''
> +
> + contributors = list(contributors.keys())
> +
> + message['To'] = maintainers + contributors
> + message['Body'] = body
> +
> + return message
> +
> +class OutputEmail():
> + '''Format the output for email'''
> + def __init__(self, config):
> + self.config = config
> +
> + self.terminal = OutputTerminal(config)
> + context = ssl.create_default_context()
> +
> + # Try to log in to server and send email
> + try:
> + self.server = smtplib.SMTP(config['smtp_server'], 587)
> + self.server.starttls(context=context) # Secure the connection
> + self.server.login(config['sender'], config['password'])
> + except Exception as exception:
> + print(exception)
> + raise exception
> +
> + def message(self,message):
> + '''send email'''
> + self.terminal.message(message)
> +
> + msg = EmailMessage()
> + msg.set_content(message.pop('Body'))
> +
> + for key in message.keys():
> + msg[key] = message[key]
> +
> + msg['From'] = self.config['sender']
> + msg['Reply-To'] = 'no-reply@dpdk.org'
> +
> + self.server.send_message(msg)
> +
> + time.sleep(1)
Why is this sleep needed?
> +
> + def __del__(self):
> + self.server.quit()
> +
> +class OutputTerminal(): # pylint: disable=too-few-public-methods
> + '''Format the output for the terminal'''
> + def __init__(self, config):
> + self.config = config
> +
> + def message(self,message):
> + '''Print email to terminal'''
> +
> + terminal = 'To:' + ', '.join(message['To']) + '\n'
> + if 'sender' in self.config.keys():
> + terminal += 'From:' + self.config['sender'] + '\n'
> +
> + terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
> +
> + if 'CC' in message:
> + terminal += 'CC:' + ', '.join(message['CC']) + '\n'
> +
> + terminal += 'Subject:' + message['Subject'] + '\n'
> + terminal += 'Body:' + message['Body'] + '\n'
> +
> + print(terminal)
> + print('-' * 80)
> +
> +def parse_config(args):
> + '''put the command line args in the right places'''
> + config = {}
> + error_msg = None
> +
> + outputs = {
> + None : OutputTerminal,
> + 'terminal' : OutputTerminal,
> + 'email' : OutputEmail
> + }
> +
> + if args.format_output == 'email':
> + if args.smtp_server is None:
> + error_msg = 'SMTP server'
> + else:
> + config['smtp_server'] = args.smtp_server
> +
> + if args.sender is None:
> + error_msg = 'sender'
> + else:
> + config['sender'] = args.sender
> +
> + if args.password is None:
> + error_msg = 'password'
> + else:
> + config['password'] = args.password
> +
> + if args.cc is not None:
> + config['CC'] = args.cc
> +
> + if error_msg is not None:
> + print('Please specify a {} for email output'.format(error_msg))
> + return None
> +
> + config['output'] = outputs[args.format_output]
> + return config
> +
> +def main():
> + '''Main entry point'''
> + parser = argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__), \
> + formatter_class=RawTextHelpFormatter)
> + parser.add_argument('--format-output', choices=['terminal','email'], \
> + default='terminal')
> + parser.add_argument('--smtp-server')
> + parser.add_argument('--password')
> + parser.add_argument('--sender')
> + parser.add_argument('--cc')
> +
> + args = parser.parse_args()
> + config = parse_config(args)
> + if config is None:
> + return
> +
> + symbols = {}
> + lastlib = library = ''
> +
> + output = config['output'](config)
> +
> + for line in sys.stdin:
> + line = line.rstrip('\n')
> +
> + if line.find('mapfile') >= 0:
> + continue
> + library, symbol, name, email = line.split(',')
> +
> + if library != lastlib:
> + message = get_message(lastlib, symbols, config)
> + output.message(message)
> + symbols = {}
> +
> + lastlib = library
> + symbols[symbol] = {'name' : name, 'email' : email}
> +
> + #print the last library
> + message = get_message(lastlib, symbols, config)
> + output.message(message)
> +
> +if __name__ == '__main__':
> + main()
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
` (2 preceding siblings ...)
2021-08-31 14:50 17% ` [dpdk-dev] [PATCH v10 3/3] maintainers: add new abi scripts Ray Kinsella
@ 2021-09-01 12:31 0% ` Aaron Conole
3 siblings, 0 replies; 200+ results
From: Aaron Conole @ 2021-09-01 12:31 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor
Ray Kinsella <mdr@ashroe.eu> writes:
> Scripts to count and track the lifecycle of DPDK symbols.
>
> The symbol-tool script reports on the growth of symbols over releases
> and list expired symbols. The notify-symbol-maintainers script
> consumes the input from symbol-tool and generates email notifications
> of expired symbols.
>
> v2: reworked to fix pylint errors
> v3: sent with the correct in-reply-to
> v4: fix typos picked up by the CI
> v5: fix terminal_size & directory args
> v6: added list-expired, to list expired experimental symbols
> v7: fix typo in comments
> v8: added tool to notify maintainers of expired symbols
> v9: removed hardcoded email addresses and script names
> v10: added ability to identify and notify the original contributors
>
> Ray Kinsella (3):
> devtools: script to track symbols over releases
> devtools: script to send notifications of expired symbols
> maintainers: add new abi scripts
>
> MAINTAINERS | 2 +
> devtools/notify-symbol-maintainers.py | 256 ++++++++++++++
> devtools/symbol-tool.py | 482 ++++++++++++++++++++++++++
> 3 files changed, 740 insertions(+)
> create mode 100755 devtools/notify-symbol-maintainers.py
> create mode 100755 devtools/symbol-tool.py
I get a whole mess of flake8 issues from this series (mostly 'backslash
is redundant' and whitespace issues). I'm using flake8 because it
pretty well enforces the PEP8 style guide. I would like to see them
addressed, but I also see that many of the python files in the DPDK tree
don't actually pass. Example::
$ flake8 ./usertools/dpdk-devbind.py | wc -l
34
$ flake8 ./usertools/dpdk-devbind.py | sed 's@./usertools/dpdk-devbind.py[:0-9]* @@' | sort -u
E128 continuation line under-indented for visual indent
E302 expected 2 blank lines, found 1
E305 expected 2 blank lines after class or function definition, found 1
E501 line too long (105 > 79 characters)
E501 line too long (80 > 79 characters)
E501 line too long (82 > 79 characters)
E501 line too long (83 > 79 characters)
E501 line too long (84 > 79 characters)
E501 line too long (85 > 79 characters)
E501 line too long (86 > 79 characters)
E501 line too long (91 > 79 characters)
E502 the backslash is redundant between brackets
E722 do not use bare 'except'
Looks like we repeat the same kinds of errors everywhere (this is on
multiple tools). Some of our in-tree python is better than others (like
app/test/autotest.py which only has 1 flake).
Maybe we can address this. Other comments inline on the patches.
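One low-effort starting point would be a repository-wide configuration that
pins down the rules we agree on; a sketch (the file name and the relaxed
line length are assumptions, nothing that has been discussed):

    # .flake8
    [flake8]
    max-line-length = 99
    exclude = .git, build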
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-01 11:01 0% ` Burakov, Anatoly
@ 2021-09-01 11:42 4% ` Ferruh Yigit
2021-09-01 13:25 4% ` Burakov, Anatoly
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-01 11:42 UTC (permalink / raw)
To: Burakov, Anatoly, Ding, Xuan, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce,
Thomas Monjalon, Ray Kinsella
On 9/1/2021 12:01 PM, Burakov, Anatoly wrote:
> On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
>> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>>> Hi Ferruh,
>>>
>>>> -----Original Message-----
>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
>>>> <anatoly.burakov@intel.com>
>>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
>>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>>
>>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>> side effect of compacting two separate mappings that just happen to be
>>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>>> not allow partial unmapping of memory mapped for DMA, the current
>>>> DPDK
>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>> maps even though it could have been unmapped [1].
>>>>>
>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>> also include page size, and always map memory page-by-page.
>>>>>
>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>
>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>> ---
>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>> 1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>> index 76a4abfd6b..1234420caf 100644
>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
>>>> other
>>>>> information from the crypto/security operation. This field will be used to
>>>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>>>> +
>>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended to
>>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>>
>>>>
>>>> Does this mean adding a new parameter to the API?
>>>> If so, this is an ABI/API break and we can't do this change in 22.02.
>>>
>>> Our original plan was to add a new parameter in order not to use a new
>>> function name. So you mean any changes to the API can only be done in the
>>> LTS version? If so, we can only add a new API and retire the old one in 22.11.
>>>
>>
>> We can add a new API anytime. Adding a new parameter to an existing API can
>> only be done in an ABI-break release.
>
> So, 22.11 then?
>
Yes.
>>
>> You can add the new API in this release, and start using it.
>> And mark the old API as deprecated in this release. This lets existing binaries
>> keep using it, but applications need to switch to the new API for compilation.
>> The old API can be removed in 22.11, and you will need a deprecation notice
>> before 22.11 for it.
>>
>> Does the above plan work for you?
>>
>
> We have slightly rethought our approach, and the functionality that Xuan
> requires does not rely on this API. They can, for all intents and purposes, be
> considered unrelated issues.
>
> I still think it's a good idea to update the API that way, so I would like to do
> that, and if we have to wait until 22.11 to fix it, I'm OK with that. Since
> there no longer is any urgency here, it's acceptable to wait for the next LTS to
> break it.
>
Got it.
As far as I understand, the main motivation in the techboard decision was to
prevent the ABI break as much as possible (the main reason for the decision
wasn't the deprecation notice being late). But if the correct thing to do is to
rename the API (and break the ABI), I don't see the benefit of waiting one more
year; it just delays the impact and adds overhead for us.
I am for being pragmatic and doing the change in this release if an API rename
is the better option; perhaps we can revisit the issue in the techboard.
Can you please describe why renaming the API is the better option, compared to
adding a new API with the new parameter?
Thanks,
ferruh
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote some API to stable
2021-09-01 9:02 0% ` Ferruh Yigit
@ 2021-09-01 11:13 3% ` Wang, Haiyue
2021-09-01 13:55 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Wang, Haiyue @ 2021-09-01 11:13 UTC (permalink / raw)
To: Yigit, Ferruh, dev; +Cc: mdr, thomas
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday, September 1, 2021 17:02
> To: Wang, Haiyue <haiyue.wang@intel.com>; dev@dpdk.org
> Cc: mdr@ashroe.eu; thomas@monjalon.net
> Subject: Re: [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote some API to stable
>
> On 9/1/2021 6:07 AM, Haiyue Wang wrote:
> > The DPDK Symbol Bot reports:
> > Please note the symbols listed below have expired. In line with the
> > DPDK ABI policy, they should be scheduled for removal, in the next
> > DPDK release.
> >
> > Symbol
> > rte_pmd_ixgbe_mdio_lock
> > rte_pmd_ixgbe_mdio_unlock
> > rte_pmd_ixgbe_mdio_unlocked_read
> > rte_pmd_ixgbe_mdio_unlocked_write
> > rte_pmd_ixgbe_upd_fctrl_sbp
>
> I wonder if we should keep PMD-specific APIs as experimental (not talking about
> mbuf 'dynfield' / 'dynflag' APIs, we can promote them).
Yes, makes sense.
>
> If an application is using a PMD-specific API, I am not sure it will be
> concerned about that API's stability.
> And keeping PMD-specific APIs experimental lets us remove them as soon as we
> can; it also further discourages users from relying on them.
We can update the DPDK ABI Policy, section 3.5.3, to document this.
https://doc.dpdk.org/guides/contributing/abi_policy.html
>
> >
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
>
>
> <...>
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-01 9:56 3% ` Ferruh Yigit
@ 2021-09-01 11:01 0% ` Burakov, Anatoly
2021-09-01 11:42 4% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-09-01 11:01 UTC (permalink / raw)
To: Ferruh Yigit, Ding, Xuan, dev
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce
On 01-Sep-21 10:56 AM, Ferruh Yigit wrote:
> On 9/1/2021 2:41 AM, Ding, Xuan wrote:
>> Hi Ferruh,
>>
>>> -----Original Message-----
>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>> Sent: Wednesday, September 1, 2021 12:01 AM
>>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
>>> <anatoly.burakov@intel.com>
>>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
>>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>>
>>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>> purposes of saving space in the internal list of mappings. This has a
>>>> side effect of compacting two separate mappings that just happen to be
>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>> not allow partial unmapping of memory mapped for DMA, the current
>>> DPDK
>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>> maps even though it could have been unmapped [1].
>>>>
>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>> also include page size, and always map memory page-by-page.
>>>>
>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>
>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>> ---
>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>> 1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>> b/doc/guides/rel_notes/deprecation.rst
>>>> index 76a4abfd6b..1234420caf 100644
>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
>>> other
>>>> information from the crypto/security operation. This field will be used to
>>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>>> +
>>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended to
>>>> + include page size. This change is targeted for DPDK 22.02.
>>>>
>>>
>>> Does this mean adding a new parameter to the API?
>>> If so, this is an ABI/API break and we can't do this change in 22.02.
>>
>> Our original plan was to add a new parameter in order not to use a new function name. So you mean any changes to the API can only be done in the LTS version?
>> If so, we can only add a new API and retire the old one in 22.11.
>>
>
> We can add a new API anytime. Adding a new parameter to an existing API can
> only be done in an ABI-break release.
So, 22.11 then?
>
> You can add the new API in this release, and start using it.
> And mark the old API as deprecated in this release. This lets existing binaries
> keep using it, but applications need to switch to the new API for compilation.
> The old API can be removed in 22.11, and you will need a deprecation notice
> before 22.11 for it.
>
> Does the above plan work for you?
>
We have slightly rethought our approach, and the functionality that Xuan
requires does not rely on this API. They can, for all intents and
purposes, be considered unrelated issues.
I still think it's a good idea to update the API that way, so I would
like to do that, and if we have to wait until 22.11 to fix it, I'm OK
with that. Since there no longer is any urgency here, it's acceptable to
wait for the next LTS to break it.
--
Thanks,
Anatoly
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-09-01 1:41 0% ` Ding, Xuan
@ 2021-09-01 9:56 3% ` Ferruh Yigit
2021-09-01 11:01 0% ` Burakov, Anatoly
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-01 9:56 UTC (permalink / raw)
To: Ding, Xuan, dev, Burakov, Anatoly
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce
On 9/1/2021 2:41 AM, Ding, Xuan wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Sent: Wednesday, September 1, 2021 12:01 AM
>> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
>> <anatoly.burakov@intel.com>
>> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
>> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
>> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>>
>> On 8/31/2021 2:10 PM, Xuan Ding wrote:
>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>> purposes of saving space in the internal list of mappings. This has a
>>> side effect of compacting two separate mappings that just happen to be
>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>> not allow partial unmapping of memory mapped for DMA, the current
>> DPDK
>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>> maps even though it could have been unmapped [1].
>>>
>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>> also include page size, and always map memory page-by-page.
>>>
>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>
>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>> ---
>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>> 1 file changed, 3 insertions(+)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>> b/doc/guides/rel_notes/deprecation.rst
>>> index 76a4abfd6b..1234420caf 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
>> other
>>> information from the crypto/security operation. This field will be used to
>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>> +
>>> +* vfio: the functions `rte_vfio_container_dma_map` will be amended to
>>> + include page size. This change is targeted for DPDK 22.02.
>>>
>>
>> Does this mean adding a new parameter to the API?
>> If so, this is an ABI/API break and we can't do this change in 22.02.
>
> Our original plan was to add a new parameter in order not to use a new function name. So you mean any changes to the API can only be done in the LTS version?
> If so, we can only add a new API and retire the old one in 22.11.
>
We can add a new API anytime. Adding a new parameter to an existing API can
only be done in an ABI-break release.
You can add the new API in this release, and start using it.
And mark the old API as deprecated in this release. This lets existing binaries
keep using it, but applications need to switch to the new API for compilation.
The old API can be removed in 22.11, and you will need a deprecation notice
before 22.11 for it.
Does the above plan work for you?
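As a sketch of what that transition could look like (the new function name
and the placement of the page-size parameter are assumptions, nothing that
has been agreed yet):

	/* current API, kept for binary compatibility but marked deprecated */
	__rte_deprecated
	int rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
				       uint64_t iova, uint64_t len);

	/* new API added now; callers pass the page size so VFIO can map
	 * memory page-by-page and later unmap any region independently */
	__rte_experimental
	int rte_vfio_container_dma_map_paged(int container_fd, uint64_t vaddr,
					     uint64_t iova, uint64_t len,
					     uint64_t pagesz);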
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode API to stable
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode " Haiyue Wang
@ 2021-09-01 9:07 0% ` Ferruh Yigit
2021-09-01 13:49 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-01 9:07 UTC (permalink / raw)
To: Haiyue Wang, dev; +Cc: mdr, thomas, Andrew Rybchenko
On 9/1/2021 6:07 AM, Haiyue Wang wrote:
> The DPDK Symbol Bot reports:
> Please note the symbols listed below have expired. In line with the
> DPDK ABI policy, they should be scheduled for removal, in the next
> DPDK release.
>
> Symbol
> rte_eth_rx_burst_mode_get
> rte_eth_tx_burst_mode_get
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote some API to stable
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote " Haiyue Wang
@ 2021-09-01 9:02 0% ` Ferruh Yigit
2021-09-01 11:13 3% ` Wang, Haiyue
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-09-01 9:02 UTC (permalink / raw)
To: Haiyue Wang, dev; +Cc: mdr, thomas
On 9/1/2021 6:07 AM, Haiyue Wang wrote:
> The DPDK Symbol Bot reports:
> Please note the symbols listed below have expired. In line with the
> DPDK ABI policy, they should be scheduled for removal, in the next
> DPDK release.
>
> Symbol
> rte_pmd_ixgbe_mdio_lock
> rte_pmd_ixgbe_mdio_unlock
> rte_pmd_ixgbe_mdio_unlocked_read
> rte_pmd_ixgbe_mdio_unlocked_write
> rte_pmd_ixgbe_upd_fctrl_sbp
I wonder if we should keep PMD specific APIs as experimental (Not talking about
mbuf 'dynfield' / 'dynflag' APIs, we can promote them).
If an application is using PMD specific API, not sure if it will concern about
PMD specific APIs.
And keeping PMD specific APIs lets us remove them as soon as we can, also adds
additional discourage for users to use them.
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
<...>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [EXT] Re: [RFC 11/15] eventdev: reserve fields in timer object
2021-08-24 15:10 3% ` Stephen Hemminger
@ 2021-09-01 6:48 0% ` Pavan Nikhilesh Bhagavatula
2021-09-07 21:02 0% ` Carrillo, Erik G
0 siblings, 1 reply; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-09-01 6:48 UTC (permalink / raw)
To: Stephen Hemminger, Erik Gabriel Carrillo
Cc: Jerin Jacob Kollanukkaran, konstantin.ananyev, dev
>On Tue, 24 Aug 2021 01:10:15 +0530
><pbhagavatula@marvell.com> wrote:
>
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Reserve fields in rte_event_timer data structure to address future
>> use cases.
>> Also, remove volatile from rte_event_timer.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
>Reserve fields are not a good idea. They don't solve future API/ABI
>problems.
>
>The issue is that you need to zero them and check that they are zero;
>otherwise they can't safely be used later. This happened with the Linux
>kernel system calls, where in several cases a flag field was added for
>future use. The problem is that old programs would work with any garbage
>in the flag field, and therefore the flag could not be extended.
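In other words, a reserved field only stays extensible if the first version
already rejects non-zero values, along these lines (a sketch; 'req' and
'flags' are illustrative names, not an existing API):

	if (req->flags != 0)
		return -EINVAL; /* otherwise old binaries leave garbage here */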
The change is in rte_event_timer, which is a fast-path object similar to
rte_mbuf.
I think fast-path objects don't have the above-mentioned caveat.
>
>A better way is to make structures internal opaque objects that
>can be resized. Why is rte_event_timer_adapter exposed in the API?
rte_event_timer_adapter has API semantics similar to rte_mempool;
the objects of the adapter are rte_event_timer objects, which carry
the timer state.
Erik,
I think we should move the fields in the current rte_event_timer structure
around. Currently we have:
struct rte_event_timer {
	struct rte_event ev;
	volatile enum rte_event_timer_state state;
	/* x-------x 4 byte hole x---------x */
	uint64_t timeout_ticks;
	uint64_t impl_opaque[2];
	uint8_t user_meta[0];
} __rte_cache_aligned;
Move to:
struct rte_event_timer {
	struct rte_event ev;
	uint64_t timeout_ticks;
	uint64_t impl_opaque[2];
	uint64_t rsvd;
	enum rte_event_timer_state state;
	uint8_t user_meta[0];
} __rte_cache_aligned;
Since it's cache-aligned, the size doesn't change.
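A compile-time check could guard this, as a sketch:

	/* both layouts pad out to one cache line, so the reorder must not
	 * change the overall size */
	RTE_BUILD_BUG_ON(sizeof(struct rte_event_timer) != RTE_CACHE_LINE_SIZE);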
Thoughts?
Thanks,
Pavan.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode API to stable
2021-09-01 5:07 3% [dpdk-dev] [PATCH v1 0/3] Promote some API to stable Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote " Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 2/3] net/ice: " Haiyue Wang
@ 2021-09-01 5:07 3% ` Haiyue Wang
2021-09-01 9:07 0% ` Ferruh Yigit
2021-09-06 5:56 3% ` [dpdk-dev] [PATCH v2] " Haiyue Wang
3 siblings, 1 reply; 200+ results
From: Haiyue Wang @ 2021-09-01 5:07 UTC (permalink / raw)
To: dev; +Cc: mdr, thomas, Haiyue Wang, Ferruh Yigit, Andrew Rybchenko
The DPDK Symbol Bot reports:
Please note the symbols listed below have expired. In line with the
DPDK ABI policy, they should be scheduled for removal, in the next
DPDK release.
Symbol
rte_eth_rx_burst_mode_get
rte_eth_tx_burst_mode_get
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
lib/ethdev/rte_ethdev.h | 2 --
lib/ethdev/version.map | 4 ++--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351f..3277e8f8fb 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4361,7 +4361,6 @@ int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
* - -ENOTSUP: routine is not supported by the device PMD.
* - -EINVAL: The queue_id is out of range.
*/
-__rte_experimental
int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
@@ -4383,7 +4382,6 @@ int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
* - -ENOTSUP: routine is not supported by the device PMD.
* - -EINVAL: The queue_id is out of range.
*/
-__rte_experimental
int rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode);
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 3eece75b72..6a12b0664a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -87,6 +87,7 @@ DPDK_22 {
rte_eth_promiscuous_get;
rte_eth_remove_rx_callback;
rte_eth_remove_tx_callback;
+ rte_eth_rx_burst_mode_get;
rte_eth_rx_queue_info_get;
rte_eth_rx_queue_setup;
rte_eth_set_queue_rate_limit;
@@ -104,6 +105,7 @@ DPDK_22 {
rte_eth_tx_buffer_drop_callback;
rte_eth_tx_buffer_init;
rte_eth_tx_buffer_set_err_callback;
+ rte_eth_tx_burst_mode_get;
rte_eth_tx_done_cleanup;
rte_eth_tx_queue_info_get;
rte_eth_tx_queue_setup;
@@ -166,9 +168,7 @@ EXPERIMENTAL {
# added in 19.11
rte_eth_dev_hairpin_capability_get;
- rte_eth_rx_burst_mode_get;
rte_eth_rx_hairpin_queue_setup;
- rte_eth_tx_burst_mode_get;
rte_eth_tx_hairpin_queue_setup;
rte_flow_dynf_metadata_offs;
rte_flow_dynf_metadata_mask;
--
2.33.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 2/3] net/ice: promote some API to stable
2021-09-01 5:07 3% [dpdk-dev] [PATCH v1 0/3] Promote some API to stable Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote " Haiyue Wang
@ 2021-09-01 5:07 3% ` Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode " Haiyue Wang
2021-09-06 5:56 3% ` [dpdk-dev] [PATCH v2] " Haiyue Wang
3 siblings, 0 replies; 200+ results
From: Haiyue Wang @ 2021-09-01 5:07 UTC (permalink / raw)
To: dev; +Cc: mdr, thomas, Haiyue Wang, Qiming Yang, Qi Zhang
The DPDK Symbol Bot reports:
Please note the symbols listed below have expired. In line with the
DPDK ABI policy, they should be scheduled for removal, in the next
DPDK release.
Symbol
rte_net_ice_dynfield_proto_xtr_metadata_offs
rte_net_ice_dynflag_proto_xtr_vlan_mask
rte_net_ice_dynflag_proto_xtr_ipv4_mask
rte_net_ice_dynflag_proto_xtr_ipv6_mask
rte_net_ice_dynflag_proto_xtr_ipv6_flow_mask
rte_net_ice_dynflag_proto_xtr_tcp_mask
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
drivers/net/ice/rte_pmd_ice.h | 3 ---
drivers/net/ice/version.map | 11 ++++-------
2 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ice/rte_pmd_ice.h b/drivers/net/ice/rte_pmd_ice.h
index 9a436a140b..c0f19fcc89 100644
--- a/drivers/net/ice/rte_pmd_ice.h
+++ b/drivers/net/ice/rte_pmd_ice.h
@@ -149,7 +149,6 @@ extern uint64_t rte_net_ice_dynflag_proto_xtr_ip_offset_mask;
* @return
* True if registered, false otherwise.
*/
-__rte_experimental
static __rte_always_inline int
rte_net_ice_dynf_proto_xtr_metadata_avail(void)
{
@@ -164,7 +163,6 @@ rte_net_ice_dynf_proto_xtr_metadata_avail(void)
* @return
* The saved protocol extraction metadata.
*/
-__rte_experimental
static __rte_always_inline uint32_t
rte_net_ice_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
{
@@ -177,7 +175,6 @@ rte_net_ice_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
* @param m
* The pointer to the mbuf.
*/
-__rte_experimental
static inline void
rte_net_ice_dump_proto_xtr_metadata(struct rte_mbuf *m)
{
diff --git a/drivers/net/ice/version.map b/drivers/net/ice/version.map
index cc837f1c00..1a633fd95e 100644
--- a/drivers/net/ice/version.map
+++ b/drivers/net/ice/version.map
@@ -1,16 +1,13 @@
DPDK_22 {
- local: *;
-};
-
-EXPERIMENTAL {
global:
- # added in 19.11
rte_net_ice_dynfield_proto_xtr_metadata_offs;
- rte_net_ice_dynflag_proto_xtr_vlan_mask;
+ rte_net_ice_dynflag_proto_xtr_ip_offset_mask;
rte_net_ice_dynflag_proto_xtr_ipv4_mask;
rte_net_ice_dynflag_proto_xtr_ipv6_mask;
rte_net_ice_dynflag_proto_xtr_ipv6_flow_mask;
rte_net_ice_dynflag_proto_xtr_tcp_mask;
- rte_net_ice_dynflag_proto_xtr_ip_offset_mask;
+ rte_net_ice_dynflag_proto_xtr_vlan_mask;
+
+ local: *;
};
--
2.33.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote some API to stable
2021-09-01 5:07 3% [dpdk-dev] [PATCH v1 0/3] Promote some API to stable Haiyue Wang
@ 2021-09-01 5:07 3% ` Haiyue Wang
2021-09-01 9:02 0% ` Ferruh Yigit
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 2/3] net/ice: " Haiyue Wang
` (2 subsequent siblings)
3 siblings, 1 reply; 200+ results
From: Haiyue Wang @ 2021-09-01 5:07 UTC (permalink / raw)
To: dev; +Cc: mdr, thomas, Haiyue Wang
The DPDK Symbol Bot reports:
Please note the symbols listed below have expired. In line with the
DPDK ABI policy, they should be scheduled for removal, in the next
DPDK release.
Symbol
rte_pmd_ixgbe_mdio_lock
rte_pmd_ixgbe_mdio_unlock
rte_pmd_ixgbe_mdio_unlocked_read
rte_pmd_ixgbe_mdio_unlocked_write
rte_pmd_ixgbe_upd_fctrl_sbp
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
drivers/net/ixgbe/rte_pmd_ixgbe.h | 5 -----
drivers/net/ixgbe/version.map | 10 +++++-----
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1..7fcdbe7452 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -586,7 +586,6 @@ int rte_pmd_ixgbe_bypass_wd_reset(uint16_t port);
* - (-ENODEV) if *port* invalid.
* - (IXGBE_ERR_SWFW_SYNC) If sw/fw semaphore acquisition failed
*/
-__rte_experimental
int
rte_pmd_ixgbe_mdio_lock(uint16_t port);
@@ -600,7 +599,6 @@ rte_pmd_ixgbe_mdio_lock(uint16_t port);
* - (-ENOTSUP) if hardware doesn't support.
* - (-ENODEV) if *port* invalid.
*/
-__rte_experimental
int
rte_pmd_ixgbe_mdio_unlock(uint16_t port);
@@ -622,7 +620,6 @@ rte_pmd_ixgbe_mdio_unlock(uint16_t port);
* - (-ENODEV) if *port* invalid.
* - (IXGBE_ERR_PHY) If PHY read command failed
*/
-__rte_experimental
int
rte_pmd_ixgbe_mdio_unlocked_read(uint16_t port, uint32_t reg_addr,
uint32_t dev_type, uint16_t *phy_data);
@@ -646,7 +643,6 @@ rte_pmd_ixgbe_mdio_unlocked_read(uint16_t port, uint32_t reg_addr,
* - (-ENODEV) if *port* invalid.
* - (IXGBE_ERR_PHY) If PHY read command failed
*/
-__rte_experimental
int
rte_pmd_ixgbe_mdio_unlocked_write(uint16_t port, uint32_t reg_addr,
uint32_t dev_type, uint16_t phy_data);
@@ -725,7 +721,6 @@ enum {
* - (-ENODEV) if *port* invalid.
* - (-ENOTSUP) if hardware doesn't support this feature.
*/
-__rte_experimental
int
rte_pmd_ixgbe_upd_fctrl_sbp(uint16_t port, int enable);
diff --git a/drivers/net/ixgbe/version.map b/drivers/net/ixgbe/version.map
index bca5cc5826..f0f29d8749 100644
--- a/drivers/net/ixgbe/version.map
+++ b/drivers/net/ixgbe/version.map
@@ -16,6 +16,10 @@ DPDK_22 {
rte_pmd_ixgbe_macsec_enable;
rte_pmd_ixgbe_macsec_select_rxsa;
rte_pmd_ixgbe_macsec_select_txsa;
+ rte_pmd_ixgbe_mdio_lock;
+ rte_pmd_ixgbe_mdio_unlock;
+ rte_pmd_ixgbe_mdio_unlocked_read;
+ rte_pmd_ixgbe_mdio_unlocked_write;
rte_pmd_ixgbe_ping_vf;
rte_pmd_ixgbe_set_all_queues_drop_en;
rte_pmd_ixgbe_set_tc_bw_alloc;
@@ -31,6 +35,7 @@ DPDK_22 {
rte_pmd_ixgbe_set_vf_vlan_filter;
rte_pmd_ixgbe_set_vf_vlan_insert;
rte_pmd_ixgbe_set_vf_vlan_stripq;
+ rte_pmd_ixgbe_upd_fctrl_sbp;
local: *;
};
@@ -40,9 +45,4 @@ EXPERIMENTAL {
rte_pmd_ixgbe_get_fdir_info;
rte_pmd_ixgbe_get_fdir_stats;
- rte_pmd_ixgbe_mdio_lock;
- rte_pmd_ixgbe_mdio_unlock;
- rte_pmd_ixgbe_mdio_unlocked_read;
- rte_pmd_ixgbe_mdio_unlocked_write;
- rte_pmd_ixgbe_upd_fctrl_sbp;
};
--
2.33.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 0/3] Promote some API to stable
@ 2021-09-01 5:07 3% Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote " Haiyue Wang
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Haiyue Wang @ 2021-09-01 5:07 UTC (permalink / raw)
To: dev; +Cc: mdr, thomas, Haiyue Wang
According to DPDK ABI Policy, section 3.5.3.
https://doc.dpdk.org/guides/contributing/abi_policy.html
Promote some APIs reported by the DPDK Symbol Bot to stable.
Haiyue Wang (3):
net/ixgbe: promote some API to stable
net/ice: promote some API to stable
ethdev: promote burst mode API to stable
drivers/net/ice/rte_pmd_ice.h | 3 ---
drivers/net/ice/version.map | 11 ++++-------
drivers/net/ixgbe/rte_pmd_ixgbe.h | 5 -----
drivers/net/ixgbe/version.map | 10 +++++-----
lib/ethdev/rte_ethdev.h | 2 --
lib/ethdev/version.map | 4 ++--
6 files changed, 11 insertions(+), 24 deletions(-)
--
2.33.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4] ethdev: fix representor port ID search by name
2021-08-31 16:06 3% ` [dpdk-dev] [PATCH v4] " Andrew Rybchenko
2021-08-31 16:32 0% ` Wang, Haiyue
@ 2021-09-01 5:15 0% ` Xing, Beilei
2021-09-01 14:55 0% ` Ferruh Yigit
2 siblings, 0 replies; 200+ results
From: Xing, Beilei @ 2021-09-01 5:15 UTC (permalink / raw)
To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, Daley, John,
Hyong Youb Kim, Yang, Qiming, Zhang, Qi Z, Wang, Haiyue,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
Thomas Monjalon, Yigit, Ferruh
Cc: dev, Viacheslav Galaktionov
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, September 1, 2021 12:06 AM
> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; Daley, John <johndale@cisco.com>;
> Hyong Youb Kim <hyonkim@cisco.com>; Xing, Beilei <beilei.xing@intel.com>;
> Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> Wang, Haiyue <haiyue.wang@intel.com>; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Viacheslav
> Ovsiienko <viacheslavo@nvidia.com>; Thomas Monjalon
> <thomas@monjalon.net>; Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov
> <viacheslav.galaktionov@oktetlabs.ru>
> Subject: [PATCH v4] ethdev: fix representor port ID search by name
>
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
>
> To this end, extend the rte_eth_dev_data structure to include the port ID of
> the backing device for representors.
>
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI check is
> disabled for the structure because of the libabigail bug [1].
>
> Potentially it is bad for out-of-tree drivers which implement representors but
> do not fill in the new parent_port_id field in rte_eth_dev_data structure. Do we
> care?
>
> mlx5 changes should be reviewed by maintainers very carefully, since we are
> not sure if we patch it correctly.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>
> v4:
> - apply mlx5 review notes: remove fallback from generic ethdev
> code and add fallback to mlx5 code to handle legacy usecase
>
> v3:
> - fix mlx5 build breakage
>
> v2:
> - fix mlx5 review notes
> - try device port ID first before parent in order to address
> backward compatibility issue
>
> drivers/net/bnxt/bnxt_reps.c | 1 +
> drivers/net/enic/enic_vf_representor.c | 1 +
> drivers/net/i40e/i40e_vf_representor.c | 1 +
> drivers/net/ice/ice_dcf_vf_representor.c | 1 +
> drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
> drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
> lib/ethdev/ethdev_driver.h | 6 +++---
> lib/ethdev/rte_class_eth.c | 2 +-
> lib/ethdev/rte_ethdev.c | 8 ++++----
> lib/ethdev/rte_ethdev_core.h | 6 ++++++
> 11 files changed, 45 insertions(+), 8 deletions(-)
>
For i40e part,
Acked-by: Beilei Xing <beilei.xing@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
2021-08-31 16:01 3% ` Ferruh Yigit
@ 2021-09-01 1:41 0% ` Ding, Xuan
2021-09-01 9:56 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ding, Xuan @ 2021-09-01 1:41 UTC (permalink / raw)
To: Yigit, Ferruh, dev, Burakov, Anatoly
Cc: maxime.coquelin, Xia, Chenbo, Hu, Jiayu, Richardson, Bruce
Hi Ferruh,
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday, September 1, 2021 12:01 AM
> To: Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
> <anatoly.burakov@intel.com>
> Cc: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; Hu,
> Jiayu <jiayu.hu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: Re: [PATCH] doc: announce change in vfio dma mapping
>
> On 8/31/2021 2:10 PM, Xuan Ding wrote:
> > Currently, the VFIO subsystem will compact adjacent DMA regions for the
> > purposes of saving space in the internal list of mappings. This has a
> > side effect of compacting two separate mappings that just happen to be
> > adjacent in memory. Since VFIO implementation on IA platforms also does
> > not allow partial unmapping of memory mapped for DMA, the current
> DPDK
> > VFIO implementation will prevent unmapping of accidentally adjacent
> > maps even though it could have been unmapped [1].
> >
> > The proper fix for this issue is to change the VFIO DMA mapping API to
> > also include page size, and always map memory page-by-page.
> >
> > [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
> >
> > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> > index 76a4abfd6b..1234420caf 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -287,3 +287,6 @@ Deprecation Notices
> > reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and
> other
> > information from the crypto/security operation. This field will be used to
> > communicate events such as soft expiry with IPsec in lookaside mode.
> > +
> > +* vfio: the functions `rte_vfio_container_dma_map` will be amended to
> > + include page size. This change is targeted for DPDK 22.02.
> >
>
>> Does this mean adding a new parameter to the API?
>> If so, this is an ABI/API break and we can't do this change in 22.02.
> Our original plan was to add a new parameter in order not to use a new function name. So you mean any changes to the API can only be done in the LTS version?
> If so, we can only add a new API and retire the old one in 22.11.
Thanks for your clarification.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4] ethdev: fix representor port ID search by name
2021-08-31 16:32 0% ` Wang, Haiyue
@ 2021-08-31 16:37 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-31 16:37 UTC (permalink / raw)
To: Wang, Haiyue, Ajit Khaparde, Somnath Kotur, Daley, John,
Hyong Youb Kim, Xing, Beilei, Yang, Qiming, Zhang, Qi Z,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
Thomas Monjalon, Yigit, Ferruh
Cc: dev, Viacheslav Galaktionov
On 8/31/21 7:32 PM, Wang, Haiyue wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Wednesday, September 1, 2021 00:06
>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; Daley,
>> John <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Xing, Beilei <beilei.xing@intel.com>;
>> Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Haiyue
>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
>> Viacheslav Ovsiienko <viacheslavo@nvidia.com>; Thomas Monjalon <thomas@monjalon.net>; Yigit, Ferruh
>> <ferruh.yigit@intel.com>
>> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>> Subject: [PATCH v4] ethdev: fix representor port ID search by name
>>
>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>
>> Getting a list of representors from a representor does not make sense.
>> Instead, a parent device should be used.
>>
>> To this end, extend the rte_eth_dev_data structure to include the port ID
>> of the backing device for representors.
>>
>> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> ---
>> The new field is added into the hole in rte_eth_dev_data structure.
>> The patch does not change ABI, but extra care is required since ABI
>> check is disabled for the structure because of the libabigail bug [1].
>>
>> Potentially it is bad for out-of-tree drivers which implement
>> representors but do not fill in the new parent_port_id field in
>> rte_eth_dev_data structure. Do we care?
>
> Set the `parent_port_id` to `RTE_MAX_ETHPORTS` as an invalid port ID
> in rte_eth_dev_allocate()?
I like the idea. It should be safer this way. Many thanks.
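Something like this in rte_eth_dev_allocate(), as a sketch:

	/* initialise the new field to an invalid port id, so out-of-tree
	 * drivers which never set it cannot silently alias port 0 */
	eth_dev->data->parent_port_id = RTE_MAX_ETHPORTS;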
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4] ethdev: fix representor port ID search by name
2021-08-31 16:06 3% ` [dpdk-dev] [PATCH v4] " Andrew Rybchenko
@ 2021-08-31 16:32 0% ` Wang, Haiyue
2021-08-31 16:37 0% ` Andrew Rybchenko
2021-09-01 5:15 0% ` Xing, Beilei
2021-09-01 14:55 0% ` Ferruh Yigit
2 siblings, 1 reply; 200+ results
From: Wang, Haiyue @ 2021-08-31 16:32 UTC (permalink / raw)
To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, Daley, John,
Hyong Youb Kim, Xing, Beilei, Yang, Qiming, Zhang, Qi Z,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
Thomas Monjalon, Yigit, Ferruh
Cc: dev, Viacheslav Galaktionov
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, September 1, 2021 00:06
> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; Daley,
> John <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Xing, Beilei <beilei.xing@intel.com>;
> Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Haiyue
> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> Viacheslav Ovsiienko <viacheslavo@nvidia.com>; Thomas Monjalon <thomas@monjalon.net>; Yigit, Ferruh
> <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Subject: [PATCH v4] ethdev: fix representor port ID search by name
>
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
>
> To this end, extend the rte_eth_dev_data structure to include the port ID
> of the backing device for representors.
>
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI
> check is disabled for the structure because of the libabigail bug [1].
>
> Potentially it is bad for out-of-tree drivers which implement
> representors but do not fill in the new parent_port_id field in
> rte_eth_dev_data structure. Do we care?
Set the `parent_port_id` to `RTE_MAX_ETHPORTS` as an invalid port ID
in rte_eth_dev_allocate()?
>
> mlx5 changes should be reviewed by maintainers very carefully, since
> we are not sure if we patch it correctly.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>
> v4:
> - apply mlx5 review notes: remove fallback from generic ethdev
> code and add fallback to mlx5 code to handle legacy usecase
>
> v3:
> - fix mlx5 build breakage
>
> v2:
> - fix mlx5 review notes
> - try device port ID first before parent in order to address
> backward compatibility issue
>
> drivers/net/bnxt/bnxt_reps.c | 1 +
> drivers/net/enic/enic_vf_representor.c | 1 +
> drivers/net/i40e/i40e_vf_representor.c | 1 +
> drivers/net/ice/ice_dcf_vf_representor.c | 1 +
> drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
> drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
> lib/ethdev/ethdev_driver.h | 6 +++---
> lib/ethdev/rte_class_eth.c | 2 +-
> lib/ethdev/rte_ethdev.c | 8 ++++----
> lib/ethdev/rte_ethdev_core.h | 6 ++++++
> 11 files changed, 45 insertions(+), 8 deletions(-)
>
For ixgbe_vf, ice_dcf
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> --
> 2.30.2
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v1] bbdev: remove experimental tag from API
@ 2021-08-31 16:25 3% ` Nicolas Chautru
1 sibling, 0 replies; 200+ results
From: Nicolas Chautru @ 2021-08-31 16:25 UTC (permalink / raw)
To: dev, gakhil
Cc: thomas, trix, hemant.agrawal, mingshan.zhang, arun.joshi,
david.marchand, mdr, Nicolas Chautru
This promotes the bbdev interface to stable.
This is overdue, as the bbdev interface has been stable for some time.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 2 ++
lib/bbdev/rte_bbdev.h | 32 --------------------------------
lib/bbdev/rte_bbdev_op.h | 9 ---------
lib/bbdev/rte_bbdev_pmd.h | 7 -------
lib/bbdev/version.map | 2 +-
5 files changed, 3 insertions(+), 49 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a55..2b1edb5 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -83,6 +83,8 @@ API Changes
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+* bbdev: the experimental tag is dropped from the bbdev library, and its
+ interfaces are considered stable as of DPDK 21.11.
ABI Changes
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124..bff08a8 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -10,10 +10,6 @@
*
* Wireless base band device abstraction APIs.
*
- * @warning
- * @b EXPERIMENTAL:
- * All functions in this file may be changed or removed without prior notice.
- *
* This API allows an application to discover, configure and use a device to
* process operations. An asynchronous API (enqueue, followed by later dequeue)
* is used for processing operations.
@@ -55,7 +51,6 @@ enum rte_bbdev_state {
* @return
* The total number of usable devices.
*/
-__rte_experimental
uint16_t
rte_bbdev_count(void);
@@ -68,7 +63,6 @@ enum rte_bbdev_state {
* @return
* true if device ID is valid and device is attached, false otherwise.
*/
-__rte_experimental
bool
rte_bbdev_is_valid(uint16_t dev_id);
@@ -82,7 +76,6 @@ enum rte_bbdev_state {
* - The next device, or
* - RTE_BBDEV_MAX_DEVS if none found
*/
-__rte_experimental
uint16_t
rte_bbdev_find_next(uint16_t dev_id);
@@ -112,7 +105,6 @@ enum rte_bbdev_state {
* - -EBUSY if the identified device has already started
* - -ENOMEM if unable to allocate memory
*/
-__rte_experimental
int
rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id);
@@ -130,7 +122,6 @@ enum rte_bbdev_state {
* - -EBUSY if the identified device has already started
* - -ENOTSUP if the interrupts are not supported by the device
*/
-__rte_experimental
int
rte_bbdev_intr_enable(uint16_t dev_id);
@@ -160,7 +151,6 @@ struct rte_bbdev_queue_conf {
* - EINVAL if the identified queue size or priority are invalid
* - EBUSY if the identified queue or its device have already started
*/
-__rte_experimental
int
rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
const struct rte_bbdev_queue_conf *conf);
@@ -176,7 +166,6 @@ struct rte_bbdev_queue_conf {
* - 0 on success
* - negative value on failure - as returned from PMD driver
*/
-__rte_experimental
int
rte_bbdev_start(uint16_t dev_id);
@@ -190,7 +179,6 @@ struct rte_bbdev_queue_conf {
* @return
* - 0 on success
*/
-__rte_experimental
int
rte_bbdev_stop(uint16_t dev_id);
@@ -204,7 +192,6 @@ struct rte_bbdev_queue_conf {
* @return
* - 0 on success
*/
-__rte_experimental
int
rte_bbdev_close(uint16_t dev_id);
@@ -222,7 +209,6 @@ struct rte_bbdev_queue_conf {
* - 0 on success
* - negative value on failure - as returned from PMD driver
*/
-__rte_experimental
int
rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
@@ -238,7 +224,6 @@ struct rte_bbdev_queue_conf {
* - 0 on success
* - negative value on failure - as returned from PMD driver
*/
-__rte_experimental
int
rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
@@ -272,7 +257,6 @@ struct rte_bbdev_stats {
* - 0 on success
* - EINVAL if invalid parameter pointer is provided
*/
-__rte_experimental
int
rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats);
@@ -284,7 +268,6 @@ struct rte_bbdev_stats {
* @return
* - 0 on success
*/
-__rte_experimental
int
rte_bbdev_stats_reset(uint16_t dev_id);
@@ -347,7 +330,6 @@ struct rte_bbdev_info {
* - 0 on success
* - EINVAL if invalid parameter pointer is provided
*/
-__rte_experimental
int
rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info);
@@ -374,7 +356,6 @@ struct rte_bbdev_queue_info {
* - 0 on success
* - EINVAL if invalid parameter pointer is provided
*/
-__rte_experimental
int
rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_queue_info *queue_info);
@@ -490,7 +471,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually enqueued (this is the number of processed
* entries in the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_enqueue_enc_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_enc_op **ops, uint16_t num_ops)
@@ -521,7 +501,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually enqueued (this is the number of processed
* entries in the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_enqueue_dec_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_dec_op **ops, uint16_t num_ops)
@@ -552,7 +531,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually enqueued (this is the number of processed
* entries in the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_enqueue_ldpc_enc_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_enc_op **ops, uint16_t num_ops)
@@ -583,7 +561,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually enqueued (this is the number of processed
* entries in the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_enqueue_ldpc_dec_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_dec_op **ops, uint16_t num_ops)
@@ -616,7 +593,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually dequeued (this is the number of entries
* copied into the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_dequeue_enc_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_enc_op **ops, uint16_t num_ops)
@@ -649,7 +625,6 @@ struct __rte_cache_aligned rte_bbdev {
* copied into the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_dequeue_dec_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_dec_op **ops, uint16_t num_ops)
@@ -681,7 +656,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually dequeued (this is the number of entries
* copied into the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_dequeue_ldpc_enc_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_enc_op **ops, uint16_t num_ops)
@@ -712,7 +686,6 @@ struct __rte_cache_aligned rte_bbdev {
* The number of operations actually dequeued (this is the number of entries
* copied into the @p ops array).
*/
-__rte_experimental
static inline uint16_t
rte_bbdev_dequeue_ldpc_dec_ops(uint16_t dev_id, uint16_t queue_id,
struct rte_bbdev_dec_op **ops, uint16_t num_ops)
@@ -764,7 +737,6 @@ typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
* @return
* Zero on success, negative value on failure.
*/
-__rte_experimental
int
rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
rte_bbdev_cb_fn cb_fn, void *cb_arg);
@@ -788,7 +760,6 @@ typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
* - EINVAL if invalid parameter pointer is provided
* - EAGAIN if the provided callback pointer does not exist
*/
-__rte_experimental
int
rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
rte_bbdev_cb_fn cb_fn, void *cb_arg);
@@ -809,7 +780,6 @@ typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
* - 0 on success
* - negative value on failure - as returned from PMD driver
*/
-__rte_experimental
int
rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
@@ -826,7 +796,6 @@ typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
* - 0 on success
* - negative value on failure - as returned from PMD driver
*/
-__rte_experimental
int
rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
@@ -854,7 +823,6 @@ typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
* - ENOTSUP if interrupts are not supported by the identified device
* - negative value on failure - as returned from PMD driver
*/
-__rte_experimental
int
rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
void *data);
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index f946842..ef3ecdb 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -9,9 +9,6 @@
* @file rte_bbdev_op.h
*
* Defines wireless base band layer 1 operations and capabilities
- *
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
*/
#ifdef __cplusplus
@@ -815,7 +812,6 @@ struct rte_bbdev_op_pool_private {
* Operation type as string or NULL if op_type is invalid
*
*/
-__rte_experimental
const char*
rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type);
@@ -839,7 +835,6 @@ struct rte_bbdev_op_pool_private {
* - Pointer to a mempool on success,
* - NULL pointer on failure.
*/
-__rte_experimental
struct rte_mempool *
rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
unsigned int num_elements, unsigned int cache_size,
@@ -859,7 +854,6 @@ struct rte_mempool *
* - 0 on success
* - EINVAL if invalid mempool is provided
*/
-__rte_experimental
static inline int
rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
struct rte_bbdev_enc_op **ops, uint16_t num_ops)
@@ -896,7 +890,6 @@ struct rte_mempool *
* - 0 on success
* - EINVAL if invalid mempool is provided
*/
-__rte_experimental
static inline int
rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
struct rte_bbdev_dec_op **ops, uint16_t num_ops)
@@ -929,7 +922,6 @@ struct rte_mempool *
* @param num_ops
* Number of structures
*/
-__rte_experimental
static inline void
rte_bbdev_dec_op_free_bulk(struct rte_bbdev_dec_op **ops, unsigned int num_ops)
{
@@ -947,7 +939,6 @@ struct rte_mempool *
* @param num_ops
* Number of structures
*/
-__rte_experimental
static inline void
rte_bbdev_enc_op_free_bulk(struct rte_bbdev_enc_op **ops, unsigned int num_ops)
{
diff --git a/lib/bbdev/rte_bbdev_pmd.h b/lib/bbdev/rte_bbdev_pmd.h
index 237e336..dd0e359 100644
--- a/lib/bbdev/rte_bbdev_pmd.h
+++ b/lib/bbdev/rte_bbdev_pmd.h
@@ -10,9 +10,6 @@
*
* Wireless base band driver-facing APIs.
*
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
* This API provides the mechanism for device drivers to register with the
* bbdev interface. User applications should not use this API.
*/
@@ -43,7 +40,6 @@
* @return
* - Slot in the rte_bbdev array for a new device;
*/
-__rte_experimental
struct rte_bbdev *
rte_bbdev_allocate(const char *name);
@@ -56,7 +52,6 @@ struct rte_bbdev *
* @return
* - 0 on success, negative on error
*/
-__rte_experimental
int
rte_bbdev_release(struct rte_bbdev *bbdev);
@@ -71,7 +66,6 @@ struct rte_bbdev *
* - NULL otherwise
*
*/
-__rte_experimental
struct rte_bbdev *
rte_bbdev_get_named_dev(const char *name);
@@ -190,7 +184,6 @@ struct rte_bbdev_ops {
* @param ret_param
* To pass data back to user application.
*/
-__rte_experimental
void
rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
enum rte_bbdev_event_type event, void *ret_param);
diff --git a/lib/bbdev/version.map b/lib/bbdev/version.map
index 3624eb1..cce3f3c 100644
--- a/lib/bbdev/version.map
+++ b/lib/bbdev/version.map
@@ -1,4 +1,4 @@
-EXPERIMENTAL {
+DPDK_22 {
global:
rte_bbdev_allocate;
--
1.8.3.1
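For reference, after this change the bbdev map file carries a single
stable ABI section of the usual shape (symbol list abridged here):

    DPDK_22 {
            global:

            rte_bbdev_allocate;
            rte_bbdev_close;
            rte_bbdev_release;

            local: *;
    };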
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4] ethdev: fix representor port ID search by name
2021-08-20 12:18 3% ` [dpdk-dev] [PATCH v3] " Andrew Rybchenko
@ 2021-08-31 16:06 3% ` Andrew Rybchenko
2021-08-31 16:32 0% ` Wang, Haiyue
` (2 more replies)
2 siblings, 3 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-31 16:06 UTC (permalink / raw)
To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Thomas Monjalon,
Ferruh Yigit
Cc: dev, Viacheslav Galaktionov
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Getting a list of representors from a representor does not make sense.
Instead, a parent device should be used.
To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
The new field is added into the hole in the rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].
Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in the new parent_port_id field in
the rte_eth_dev_data structure. Do we care?
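For illustration, such a driver would need a one-line addition in its
representor init path, mirroring the in-tree PMD changes below (the
repr/pf field names here are assumptions):

    /* alongside the existing representor setup in the init callback */
    ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
    ethdev->data->representor_id = repr->vf_id;
    /* new: record the port ID of the backing (parent) device */
    ethdev->data->parent_port_id = repr->pf_ethdev->data->port_id;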
mlx5 changes should be reviewed by the maintainers very carefully, since
we are not sure we have patched it correctly.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
v4:
- apply mlx5 review notes: remove fallback from generic ethdev
code and add fallback to mlx5 code to handle legacy usecase
v3:
- fix mlx5 build breakage
v2:
- fix mlx5 review notes
- try device port ID first before parent in order to address
backward compatibility issue
drivers/net/bnxt/bnxt_reps.c | 1 +
drivers/net/enic/enic_vf_representor.c | 1 +
drivers/net/i40e/i40e_vf_representor.c | 1 +
drivers/net/ice/ice_dcf_vf_representor.c | 1 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 13 +++++++++++++
drivers/net/mlx5/windows/mlx5_os.c | 13 +++++++++++++
lib/ethdev/ethdev_driver.h | 6 +++---
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 8 ++++----
lib/ethdev/rte_ethdev_core.h | 6 ++++++
11 files changed, 45 insertions(+), 8 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d..902591cd39 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = rep_params->vf_id;
+ eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 79dd6e5640..6ee7967ce9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = vf->vf_id;
+ eth_dev->data->parent_port_id = pf->port_id;
eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
sizeof(struct rte_ether_addr) *
ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..04d1842a5c 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->parent_port_id = pf->dev_data->port_id;
/* Setting the number queues allocated to the VF */
ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e9..c7cd3fd290 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
vf_rep_eth_dev->data->representor_id = repr->vf_id;
+ vf_rep_eth_dev->data->parent_port_id = repr->dcf_eth_dev->data->port_id;
vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..7a2063849e 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
/* Set representor device ops */
ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..adbb9fbb10 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->parent_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->parent_port_id = eth_dev->data->port_id;
}
priv->mp_id.port_id = eth_dev->data->port_id;
strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c751..c266b39696 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->parent_port_id = port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS)
+ eth_dev->data->parent_port_id = eth_dev->data->port_id;
}
/*
* Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 40e474aa7e..b940e6cb38 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
* For backward compatibility, if no representor info, direct
* map legacy VF (no controller and pf).
*
- * @param ethdev
- * Handle of ethdev port.
+ * @param port_id
+ * Port ID of the backing device.
* @param type
* Representor type.
* @param controller
@@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
*/
__rte_internal
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..e3b7ab9728 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
c = i / (np * nf);
p = (i / nf) % np;
f = i % nf;
- if (rte_eth_representor_id_get(edev,
+ if (rte_eth_representor_id_get(edev->data->parent_port_id,
eth_da.type,
eth_da.nb_mh_controllers == 0 ? -1 :
eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..228ef7bf23 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
}
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id)
@@ -6013,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
return -EINVAL;
/* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+ ret = rte_eth_representor_info_get(port_id, NULL);
if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
controller == -1 && pf == -1) {
/* Direct mapping for legacy VF representor. */
@@ -6028,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
if (info == NULL)
return -ENOMEM;
info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+ ret = rte_eth_representor_info_get(port_id, info);
if (ret < 0)
goto out;
@@ -6047,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
continue;
if (info->ranges[i].id_end < info->ranges[i].id_base) {
RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- ethdev->data->port_id, info->ranges[i].id_base,
+ port_id, info->ranges[i].id_base,
info->ranges[i].id_end, i);
continue;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..72fefa59c2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
/**< Switch-specific identifier.
* Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
*/
+ uint16_t parent_port_id;
+ /**< Port ID of the backing device.
+ * This device will be used to query representor
+ * info and calculate representor IDs.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
--
2.30.2
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping
@ 2021-08-31 16:01 3% ` Ferruh Yigit
2021-09-01 1:41 0% ` Ding, Xuan
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-31 16:01 UTC (permalink / raw)
To: Xuan Ding, dev, anatoly.burakov
Cc: maxime.coquelin, chenbo.xia, jiayu.hu, bruce.richardson
On 8/31/2021 2:10 PM, Xuan Ding wrote:
> Currently, the VFIO subsystem will compact adjacent DMA regions for the
> purposes of saving space in the internal list of mappings. This has a
> side effect of compacting two separate mappings that just happen to be
> adjacent in memory. Since VFIO implementation on IA platforms also does
> not allow partial unmapping of memory mapped for DMA, the current DPDK
> VFIO implementation will prevent unmapping of accidentally adjacent
> maps even though it could have been unmapped [1].
>
> The proper fix for this issue is to change the VFIO DMA mapping API to
> also include page size, and always map memory page-by-page.
>
> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 76a4abfd6b..1234420caf 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -287,3 +287,6 @@ Deprecation Notices
> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
> information from the crypto/security operation. This field will be used to
> communicate events such as soft expiry with IPsec in lookaside mode.
> +
> +* vfio: the function `rte_vfio_container_dma_map` will be amended to
> + include page size. This change is targeted for DPDK 22.02.
>
Does this mean adding a new parameter to the API?
If so, this is an ABI/API break and we can't make this change in the 22.02 release.
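For illustration, the amended prototype would presumably look roughly
like the sketch below; the new parameter's name and position are
assumptions, not part of the notice:

    int
    rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
                               uint64_t iova, uint64_t len,
                               uint64_t pagesz); /* new: mapping page size */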
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
2021-08-26 15:44 4% ` Andrew Rybchenko
@ 2021-08-31 15:48 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-08-31 15:48 UTC (permalink / raw)
To: Andrew Rybchenko, Bruce Richardson, Akhil Goyal
Cc: Ferruh Yigit, dev, Konstantin Ananyev, Jerin Jacob Kollanukkaran
On 26/08/2021 16:44, Andrew Rybchenko wrote:
> On 8/26/21 2:04 PM, Bruce Richardson wrote:
>> On Thu, Aug 26, 2021 at 10:46:35AM +0000, Akhil Goyal wrote:
>>>> Target is to reduce the public interface surface to improve the ABI
>>>> stability and this is preparation for the longer term stable ABI
>>>> support.
>>>>
>>>> Mainly device abstraction layer libraries are impacted because they have
>>>> two interfaces: one is a public interface to applications and the other is
>>>> an internal interface to drivers. Some driver/internal interface
>>>> structures/symbols are in the public interface by mistake, this work is
>>>> to clean them.
>>>> Also, some libraries have 'static inline' functions for performance
>>>> reasons (like the ones in ethdev); this work plans to split the
>>>> structures and hide the part that is not used by inline functions.
>>>>
>>>> The need of the work for the stable ABI already discussed and planned by
>>>> the DPDK technical board:
>>>> https://mails.dpdk.org/archives/dev/2021-July/214662.html
>>>>
>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> ---
>>> Acked-by: Akhil Goyal <gakhil@marvell.com>
>>
>> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>>
>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3] ethdev: fix representor port ID search by name
2021-08-20 12:18 3% ` [dpdk-dev] [PATCH v3] " Andrew Rybchenko
@ 2021-08-31 15:41 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-31 15:41 UTC (permalink / raw)
To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Thomas Monjalon,
Ferruh Yigit
Cc: dev, Viacheslav Galaktionov
On 8/20/21 3:18 PM, Andrew Rybchenko wrote:
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
>
> To this end, extend the rte_eth_dev_data structure to include the port ID
> of the parent device for representors.
>
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI
> check is disabled for the structure because of the libabigail bug [1].
>
> Potentially it is bad for out-of-tree drivers which implement
> representors but do not fill in a new parent_port_id field in
> rte_eth_dev_data structure. Do we care?
>
> Maybe the patch should add lines to the release notes, but I'd like
> to get initial feedback first.
>
> mlx5 changes should be reviewed by the maintainers very carefully, since
> we are not sure we have patched it correctly.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>
> v3:
> - fix mlx5 build breakage
>
> v2:
> - fix mlx5 review notes
> - try device port ID first before parent in order to address
> backward compatibility issue
>
> drivers/net/bnxt/bnxt_reps.c | 1 +
> drivers/net/enic/enic_vf_representor.c | 1 +
> drivers/net/i40e/i40e_vf_representor.c | 1 +
> drivers/net/ice/ice_dcf_vf_representor.c | 1 +
> drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 17 +++++++++++++++++
> drivers/net/mlx5/windows/mlx5_os.c | 17 +++++++++++++++++
> lib/ethdev/ethdev_driver.h | 6 +++---
> lib/ethdev/rte_class_eth.c | 22 ++++++++++++++++++++--
> lib/ethdev/rte_ethdev.c | 8 ++++----
> lib/ethdev/rte_ethdev_core.h | 4 ++++
> 11 files changed, 70 insertions(+), 9 deletions(-)
There is a later follow-up in the v2 review notes which should be
addressed. I'll send v4 tomorrow.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v10 3/3] maintainers: add new abi scripts
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 1/3] devtools: script to track symbols over releases Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
@ 2021-08-31 14:50 17% ` Ray Kinsella
2021-09-01 12:31 0% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Aaron Conole
3 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-31 14:50 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
Add new abi management scripts to the MAINTAINERS file.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..ff8245271f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -129,6 +129,8 @@ F: devtools/gen-abi.sh
F: devtools/libabigail.abignore
F: devtools/update-abi.sh
F: devtools/update_version_map_abi.py
+F: devtools/notify-symbol-maintainers.py
+F: devtools/symbol-tool.py
F: buildtools/check-symbols.sh
F: buildtools/map-list-symbol.sh
F: drivers/*/*/*.map
--
2.26.2
^ permalink raw reply [relevance 17%]
* [dpdk-dev] [PATCH v10 1/3] devtools: script to track symbols over releases
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
@ 2021-08-31 14:50 5% ` Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-31 14:50 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script can count the symbols
added between two DPDK releases, and list the experimental
symbols present in two given DPDK releases
(expired symbols).
example usages:
Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols
Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05
List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/symbol-tool.py | 482 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 482 insertions(+)
create mode 100755 devtools/symbol-tool.py
diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..3d093a0802
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,482 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+ from parsley import makeGrammar
+except ImportError:
+ print('This script uses the package Parsley to parse C Mapfiles.\n'
+ 'This can be installed with \"pip install parsley".')
+ sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script can count the symbols
+added between two DPDK releases, and list the experimental
+symbols present in two given DPDK releases
+(expired symbols), including the name & email of the original contributor.
+
+example usages:
+
+Count symbols added since v19.11
+$ {s} count-symbols
+
+Count symbols added since v20.11
+$ {s} count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ {s} list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ {s} list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+"""
+
+def get_abi_versions():
+ '''Returns a string of possible dpdk abi versions'''
+
+ year = datetime.date.today().year - 2000
+ tags = " |".join(['\'{}\''.format(i) \
+ for i in reversed(range(21, year + 1)) ])
+ tags = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+ return tags
+
+def get_dpdk_releases():
+ '''Return a list of DPDK release tag names since v19.11'''
+
+ year = datetime.date.today().year - 2000
+ year_range = "|".join("{}".format(i) for i in range(19,year + 1))
+ pattern = re.compile(r'^\"v(' + year_range + r')\.\d{2}\"$')
+
+ cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+ try:
+ result = subprocess.run(cmd, \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ print("Failed to interogate git for release tags")
+ sys.exit()
+
+
+ tags = result.stdout.decode('utf-8').split('\n')
+
+ # find the non-rc release tags between now and v19.11
+ tags = [ tag.replace('\"','') \
+ for tag in reversed(tags) \
+ if pattern.match(tag) ][:-3]
+
+ return tags
+
+def fix_directory_name(path):
+ '''Prepend librte to the source directory name'''
+ mapfilepath1 = str(path.parent.name)
+ mapfilepath2 = str(path.parents[1])
+ mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+ return mapfilepath
+
+def directory_renamed(path, rel):
+ '''Fix removal of the librte_ from the directory names'''
+
+ mapfilepath = fix_directory_name(path)
+ tagfile = '{}:{}/{}'.format(rel, mapfilepath, path.name)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile], \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ return result
+
+def mapfile_renamed(path, rel):
+ '''Fix renaming of the map file'''
+ newfile = None
+
+ result = subprocess.run(['git', 'ls-tree', \
+ rel, str(path.parent) + '/'], \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ dentries = result.stdout.decode('utf-8')
+ dentries = dentries.split('\n')
+
+ # filter entries looking for the map file
+ dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+ if len(dentries) > 1 or len(dentries) == 0:
+ return None
+
+ dparts = dentries[0].split('/')
+ newfile = dparts[len(dparts) - 1]
+
+ if newfile is not None:
+ tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+ try:
+ result = subprocess.run(['git', 'show', tagfile], \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ else:
+ result = None
+
+ return result
+
+def mapfile_and_directory_renamed(path, rel):
+ '''Fix renaming of the map file & the source directory'''
+ mapfilepath = Path("{}/{}".format(fix_directory_name(path),path.name))
+
+ return mapfile_renamed(mapfilepath, rel)
+
+FIX_STRATEGIES = [directory_renamed, \
+ mapfile_renamed, \
+ mapfile_and_directory_renamed]
+
+def get_symbols(map_parser, release, mapfile_path):
+ '''Count the symbols for a given release and mapfile'''
+ abi_sections = {}
+
+ tagfile = '{}:{}'.format(release,mapfile_path)
+ try:
+ result = subprocess.run(['git', 'show', tagfile], \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ result = None
+
+ for fix_strategy in FIX_STRATEGIES:
+ if result is not None:
+ break
+ result = fix_strategy(mapfile_path, release)
+
+ if result is not None:
+ mapfile = result.stdout.decode('utf-8')
+ abi_sections = map_parser(mapfile).abi()
+
+ return abi_sections
+
+def get_terminal_rows():
+ '''Find the number of rows in the terminal'''
+
+ try:
+ return os.get_terminal_size().lines
+ except IOError:
+ return 0
+
+class SymbolOwner():
+ '''Find the symbols original contributors name and email'''
+ symbol_regex = {}
+ blame_regex = {'name' : r'author\s(.*)', \
+ 'email' : r'author-mail\s<(.*)>'}
+
+ def __init__(self, libpath, symbol):
+ self.libpath = libpath
+ self.symbol = symbol
+
+ #find variable definitions in C files, and functions in headers.
+ self.symbol_regex = \
+ {'*.c' : r'^(?!extern).*' + self.symbol + '[^()]*;', \
+ '*.h' : r'__rte_experimental(?:.*\n){0,2}.*' + self.symbol}
+
+ def find_symbol_location(self):
+ '''Find where the symbol is defined in the source'''
+ for key in self.symbol_regex:
+ for path in Path(self.libpath).rglob(key):
+ file_text = open(path).read()
+
+ #find where the symbol is defined, either preceded by the
+ #__rte_experimental tag (functions) or followed by a ; (variables)
+
+ exp = self.symbol_regex[key]
+ pattern = re.compile(exp, re.MULTILINE)
+ search = pattern.search(file_text)
+
+ if search is not None:
+ symbol_pos = search.span()[1]
+ symbol_line = file_text.count('\n', 0, symbol_pos) + 1
+
+ return [str(path),symbol_line]
+ return None
+
+ def find_symbol_owner(self):
+ '''Find the symbol's original contributor's name and email'''
+ owners = {}
+ location = self.find_symbol_location()
+
+ if location is None:
+ return None
+
+ line = '-L {},{}'.format(location[1],location[1])
+ #git blame -p(orcelain) -L(ine) path
+ args = ['-p', line, location[0]]
+
+ try:
+ result = subprocess.run(['git', 'blame'] + args, \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ return None
+
+ blame = result.stdout.decode('utf-8')
+ for key in self.blame_regex:
+ pattern = re.compile(self.blame_regex[key], re.MULTILINE)
+ match = pattern.search(blame)
+
+ owners[key] = match.groups()[0]
+
+ return owners
+
+class SymbolCountOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+ self.column_titles = ['mapfile'] + dpdk_releases
+
+ self.terminal_rows = get_terminal_rows()
+ self.row = 0
+
+ def set_terminal_output(self,dpdk_rel):
+ '''Set the output format for fixed-width terminal display'''
+
+ self.output_fmt = '{:<50}' + \
+ ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+ self.column_fmt = '{:50}' + \
+ ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+ def set_csv_output(self,dpdk_rel):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},' + \
+ ','.join(['{},{}'] * (len(dpdk_rel)))
+ self.column_fmt = '{},' + \
+ ','.join(['{},'] * (len(dpdk_rel)))
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+ self.row += 1
+
+ def print_row(self, mapfile, symbols):
+ '''Print row of symbol values'''
+ mapfile = str(mapfile)
+ print(self.output_fmt.format(*([mapfile] + symbols)))
+ self.row += 1
+
+ if((self.terminal_rows>0) and ((self.row % self.terminal_rows) == 0)):
+ self.print_columns()
+
+ OUTPUT_FORMATS = { None: set_terminal_output, \
+ 'terminal': set_terminal_output, \
+ 'csv': set_csv_output }
+
+class ListExpiredOutput():
+ '''Format the output to supported formats'''
+ output_fmt = ""
+ column_fmt = ""
+
+ def __init__(self, format_output, dpdk_releases):
+ self.terminal = True
+ self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+ self.column_titles = ['mapfile'] + \
+ ['expired (' + ','.join(dpdk_releases) + ')'] + \
+ ['contributor name', 'contributor email']
+
+ def set_terminal_output(self, _):
+ '''Set the output format for fixed-width terminal display'''
+
+ self.output_fmt = '{:<50}{:<50}{:<25}{:<25}'
+ self.column_fmt = '{:50}{:50}{:25}{:25}'
+
+ def set_csv_output(self, _):
+ '''Set the output format to Comma Separated Values'''
+
+ self.output_fmt = '{},{},{},{}'
+ self.column_fmt = '{},{},{},{}'
+ self.terminal = False
+
+ def print_columns(self):
+ '''Print column rows with release names'''
+ print(self.column_fmt.format(*self.column_titles))
+
+ def print_row(self, mapfile, symbols, owner):
+ '''Print row of symbol values'''
+
+ for symbol in symbols:
+ mapfile = str(mapfile)
+ name = owner[symbol]['name'] \
+ if owner[symbol] is not None else ''
+ email = owner[symbol]['email'] \
+ if owner[symbol] is not None else ''
+
+ print(self.output_fmt.format(mapfile, symbol, name, email))
+ if self.terminal:
+ mapfile = ''
+
+ OUTPUT_FORMATS = { None: set_terminal_output, \
+ 'terminal': set_terminal_output, \
+ 'csv': set_csv_output }
+
+class CountSymbolsAction:
+ ''' Logic to count symbols added since a given release '''
+ IGNORE_SECTIONS = ['EXPERIMENTAL','INTERNAL']
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.symbols_count = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbol_count = experimental_count = 0
+
+ symbols = get_symbols(self.parser, release, self.path)
+
+ # which versions are present, and we care about
+ abi_vers = [abi_ver \
+ for abi_ver in symbols \
+ if abi_ver not in self.IGNORE_SECTIONS]
+
+ for abi_ver in abi_vers:
+ symbol_count += len(symbols[abi_ver])
+
+ # count experimental symbols
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental_count = len(symbols['EXPERIMENTAL'])
+
+ self.symbols_count += [symbol_count, experimental_count]
+
+ def __del__(self):
+ self.format_output.print_row(self.path.parent, self.symbols_count)
+
+class ListExpiredAction:
+ ''' Logic to list expired symbols between two releases '''
+
+ def __init__(self, mapfile_path, mapfile_parser, format_output):
+ self.path = mapfile_path
+ self.parser = mapfile_parser
+ self.format_output = format_output
+ self.experimental_symbols = []
+
+ def add_mapfile(self, release):
+ ''' add a version mapfile '''
+ symbols = get_symbols(self.parser, release, self.path)
+
+ if 'EXPERIMENTAL' in symbols.keys():
+ experimental = [exp.strip() for exp in symbols['EXPERIMENTAL']]
+
+ self.experimental_symbols.append(experimental)
+
+ def __del__(self):
+ if len(self.experimental_symbols) != 2:
+ return
+
+ tmp = self.experimental_symbols
+ # find symbols present in both dpdk releases
+ intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+ # check for empty set
+ if intersect_syms == []:
+ return
+
+ sym_owner = {}
+ for sym in intersect_syms:
+ sym_owner[sym] = SymbolOwner(self.path.parent, sym).find_symbol_owner()
+
+ self.format_output.print_row(self.path.parent, intersect_syms, sym_owner)
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction, \
+ 'count-symbols': CountSymbolsAction, \
+ 'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput, \
+ 'count-symbols': SymbolCountOutput, \
+ 'list-expired': ListExpiredOutput}
+
+def main():
+ '''Main entry point'''
+
+ dpdk_releases = get_dpdk_releases()
+
+ parser = argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__), \
+ formatter_class=RawTextHelpFormatter
+ )
+ parser.add_argument('mode', choices=['count-symbols','list-expired'])
+ parser.add_argument('--format-output', choices=['terminal','csv'], \
+ default='terminal')
+ parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+ default=SRC_DIRECTORIES)
+ parser.add_argument('--releases', \
+ help='2 x comma separated release tags e.g. \'' \
+ + ','.join([dpdk_releases[0],dpdk_releases[-1]]) \
+ + '\'')
+ args = parser.parse_args()
+
+ if args.releases is not None:
+ dpdk_releases = args.releases.split(',')
+
+ if args.mode == 'list-expired':
+ if len(dpdk_releases) < 2:
+ sys.exit('Please specify two releases to compare ' \
+ 'in \'list-expired\' mode.')
+ dpdk_releases = [dpdk_releases[0], dpdk_releases[len(dpdk_releases) - 1]]
+
+ action = ACTIONS[args.mode]
+ format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+ map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+ map_parser = makeGrammar(map_grammar, {})
+
+ format_output.print_columns()
+
+ for src_dir in args.directory.split(','):
+ for path in Path(src_dir).rglob('*.map'):
+ release_action = action(path, map_parser, format_output)
+
+ for release in dpdk_releases:
+ release_action.add_mapfile(release)
+
+ # all the magic happens in the destructor
+ del release_action
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 1/3] devtools: script to track symbols over releases Ray Kinsella
@ 2021-08-31 14:50 5% ` Ray Kinsella
2021-09-01 12:46 0% ` Aaron Conole
2021-09-01 13:01 5% ` David Marchand
2021-08-31 14:50 17% ` [dpdk-dev] [PATCH v10 3/3] maintainers: add new abi scripts Ray Kinsella
2021-09-01 12:31 0% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Aaron Conole
3 siblings, 2 replies; 200+ results
From: Ray Kinsella @ 2021-08-31 14:50 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
Use this script with the output of the DPDK symbol tool to notify
maintainers and contributors of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.
Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output terminal
Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password> --cc <someone@somewhere.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/notify-symbol-maintainers.py | 256 ++++++++++++++++++++++++++
1 file changed, 256 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..ee554687ff
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,256 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+# pylint: disable=invalid-name
+'''Tool to notify maintainers of expired symbols'''
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool, to notify maintainers
+and contributors of expired symbols by email. You need to define the environment
+variable DPDK_GETMAINTAINER_PATH for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password> \\
+--cc <someone@somewhere.com>
+'''
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+'''
+
+ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
+MAINTAINERS = 'MAINTAINERS'
+get_maintainer = ['devtools/get-maintainer.sh', \
+ '--email', '-f']
+
+def _get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+ try:
+ cmd = get_maintainer + [libpath]
+ result = subprocess.run(cmd, \
+ stdout=subprocess.PIPE, \
+ stderr=subprocess.PIPE,
+ check=True)
+ except subprocess.CalledProcessError:
+ return None
+
+ if result is None:
+ return None
+
+ email = result.stdout.decode('utf-8')
+ if email == '':
+ return None
+
+ email = list(filter(None,email.split('\n')))
+ return email
+
+default_maintainers = _get_maintainers(ABI_POLICY) + \
+ _get_maintainers(MAINTAINERS)
+
+def get_maintainers(libpath):
+ '''Get the maintainers for given library'''
+ maintainers=_get_maintainers(libpath)
+
+ if maintainers is None:
+ maintainers = default_maintainers
+
+ return maintainers
+
+def get_message(library, symbols, config):
+ '''Build email message from symbols, config and maintainers'''
+ contributors = {}
+ message = {}
+ maintainers = get_maintainers(library)
+
+ if maintainers != default_maintainers:
+ message['CC'] = default_maintainers.copy()
+
+ if 'CC' in config:
+ message.setdefault('CC',[]).append(config['CC'])
+
+ message['Subject'] = 'Expired symbols in {}\n'.format(library)
+
+ body = EMAIL_TEMPLATE
+ body += '{:<50}{:<25}{:<25}\n'.format('Symbol','Contributor','Email')
+ for sym in symbols:
+ body += ('{:<50}{:<25}{:<25}\n'.format(sym,\
+ symbols[sym]['name'],
+ symbols[sym]['email'],
+ ))
+ email = symbols[sym]['email']
+ contributors[email] = ''
+
+ contributors = list(contributors.keys())
+
+ message['To'] = maintainers + contributors
+ message['Body'] = body
+
+ return message
+
+class OutputEmail():
+ '''Format the output for email'''
+ def __init__(self, config):
+ self.config = config
+
+ self.terminal = OutputTerminal(config)
+ context = ssl.create_default_context()
+
+ # Try to log in to server and send email
+ try:
+ self.server = smtplib.SMTP(config['smtp_server'], 587)
+ self.server.starttls(context=context) # Secure the connection
+ self.server.login(config['sender'], config['password'])
+ except Exception as exception:
+ print(exception)
+ raise exception
+
+ def message(self,message):
+ '''send email'''
+ self.terminal.message(message)
+
+ msg = EmailMessage()
+ msg.set_content(message.pop('Body'))
+
+ for key in message.keys():
+ msg[key] = message[key]
+
+ msg['From'] = self.config['sender']
+ msg['Reply-To'] = 'no-reply@dpdk.org'
+
+ self.server.send_message(msg)
+
+ time.sleep(1)
+
+ def __del__(self):
+ self.server.quit()
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+ '''Format the output for the terminal'''
+ def __init__(self, config):
+ self.config = config
+
+ def message(self,message):
+ '''Print email to terminal'''
+
+ terminal = 'To:' + ', '.join(message['To']) + '\n'
+ if 'sender' in self.config.keys():
+ terminal += 'From:' + self.config['sender'] + '\n'
+
+ terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+
+ if 'CC' in message:
+ terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+ terminal += 'Subject:' + message['Subject'] + '\n'
+ terminal += 'Body:' + message['Body'] + '\n'
+
+ print(terminal)
+ print('-' * 80)
+
+def parse_config(args):
+ '''put the command line args in the right places'''
+ config = {}
+ error_msg = None
+
+ outputs = {
+ None : OutputTerminal,
+ 'terminal' : OutputTerminal,
+ 'email' : OutputEmail
+ }
+
+ if args.format_output == 'email':
+ if args.smtp_server is None:
+ error_msg = 'SMTP server'
+ else:
+ config['smtp_server'] = args.smtp_server
+
+ if args.sender is None:
+ error_msg = 'sender'
+ else:
+ config['sender'] = args.sender
+
+ if args.password is None:
+ error_msg = 'password'
+ else:
+ config['password'] = args.password
+
+ if args.cc is not None:
+ config['CC'] = args.cc
+
+ if error_msg is not None:
+ print('Please specify a {} for email output'.format(error_msg))
+ return None
+
+ config['output'] = outputs[args.format_output]
+ return config
+
+def main():
+ '''Main entry point'''
+ parser = argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__), \
+ formatter_class=RawTextHelpFormatter)
+ parser.add_argument('--format-output', choices=['terminal','email'], \
+ default='terminal')
+ parser.add_argument('--smtp-server')
+ parser.add_argument('--password')
+ parser.add_argument('--sender')
+ parser.add_argument('--cc')
+
+ args = parser.parse_args()
+ config = parse_config(args)
+ if config is None:
+ return
+
+ symbols = {}
+ lastlib = library = ''
+
+ output = config['output'](config)
+
+ for line in sys.stdin:
+ line = line.rstrip('\n')
+
+ if line.find('mapfile') >= 0:
+ continue
+ library, symbol, name, email = line.split(',')
+
+ if library != lastlib:
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+ symbols = {}
+
+ lastlib = library
+ symbols[symbol] = {'name' : name, 'email' : email}
+
+ #print the last library
+ message = get_message(lastlib, symbols, config)
+ output.message(message)
+
+if __name__ == '__main__':
+ main()
--
2.26.2
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols
@ 2021-08-31 14:50 3% ` Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 1/3] devtools: script to track symbols over releases Ray Kinsella
` (3 more replies)
2021-09-03 13:23 3% ` [dpdk-dev] [PATCH v11 " Ray Kinsella
` (3 subsequent siblings)
4 siblings, 4 replies; 200+ results
From: Ray Kinsella @ 2021-08-31 14:50 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole
Scripts to count and track the lifecycle of DPDK symbols.
The symbol-tool script reports on the growth of symbols over releases
and lists expired symbols. The notify-symbol-maintainers script
consumes the input from symbol-tool and generates email notifications
of expired symbols.
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments
v8: added tool to notify maintainers of expired symbols
v9: removed hardcoded emails addressed and script names
v10: added ability to identify and notify the original contributors
Ray Kinsella (3):
devtools: script to track symbols over releases
devtools: script to send notifications of expired symbols
maintainers: add new abi scripts
MAINTAINERS | 2 +
devtools/notify-symbol-maintainers.py | 256 ++++++++++++++
devtools/symbol-tool.py | 482 ++++++++++++++++++++++++++
3 files changed, 740 insertions(+)
create mode 100755 devtools/notify-symbol-maintainers.py
create mode 100755 devtools/symbol-tool.py
--
2.26.2
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
2021-08-26 10:14 0% ` Burakov, Anatoly
@ 2021-08-31 13:42 0% ` Ding, Xuan
0 siblings, 0 replies; 200+ results
From: Ding, Xuan @ 2021-08-31 13:42 UTC (permalink / raw)
To: Burakov, Anatoly, Richardson, Bruce
Cc: Yigit, Ferruh, maxime.coquelin, dev, Xia, Chenbo, Hu, Jiayu,
techboard, David Marchand
Hi,
> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Thursday, August 26, 2021 6:15 PM
> To: Richardson, Bruce <bruce.richardson@intel.com>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; dev@dpdk.org; maxime.coquelin@redhat.com; Xia,
> Chenbo <chenbo.xia@intel.com>; Hu, Jiayu <jiayu.hu@intel.com>;
> techboard@dpdk.org; David Marchand <david.marchand@redhat.com>
> Subject: Re: [PATCH] doc: announce change in dma mapping/unmapping
>
> On 26-Aug-21 11:09 AM, Bruce Richardson wrote:
> > On Thu, Aug 26, 2021 at 10:46:07AM +0100, Burakov, Anatoly wrote:
> >> On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
> >>> On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
> >>>> On 25-Aug-21 12:27 PM, Xuan Ding wrote:
> >>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for
> the
> >>>>> purposes of saving space in the internal list of mappings. This has a
> >>>>> side effect of compacting two separate mappings that just happen to
> be
> >>>>> adjacent in memory. Since VFIO implementation on IA platforms also
> does
> >>>>> not allow partial unmapping of memory mapped for DMA, the current
> DPDK
> >>>>> VFIO implementation will prevent unmapping of accidentally adjacent
> >>>>> maps even though it could have been unmapped [1].
> >>>>>
> >>>>> The proper fix for this issue is to change the VFIO DMA mapping API
> to
> >>>>> also include page size, and always map memory page-by-page.
> >>>>>
> >>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
> >>>>>
> >>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> >>>>> ---
> >>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
> >>>>> 1 file changed, 3 insertions(+)
> >>>>>
> >>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>>> b/doc/guides/rel_notes/deprecation.rst
> >>>>> index 76a4abfd6b..272ffa993e 100644
> >>>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>>> @@ -287,3 +287,6 @@ Deprecation Notices
> >>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings
> and other
> >>>>> information from the crypto/security operation. This field will be
> used to
> >>>>> communicate events such as soft expiry with IPsec in lookaside
> mode.
> >>>>> +
> >>>>> + * vfio: the functions `rte_vfio_container_dma_map` and
> >>>>> `rte_vfio_container_dma_unmap`
> >>>>> + will be amended to include page size. This change is targeted for
> DPDK 21.11.
> >>>>>
> >>>>
> >>>> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> >>>>
> >>>
> >>> Techboard decision was to add a new API, instead of updating existing
> ones, to
> >>> not break the apps using this API.
> >>>
> >>> @Xuan, @Anatoly, can you please confirm if this will solve your problem?
> >>>
> >>
> >> I don't think adding a new API is a particularly good solution. The "new"
> >> API will be almost exactly as the old one, but adding one parameter. I
> don't
> >> expect code duplication to be an issue, but having two API's that do the
> >> same thing seems like it's rife for potential confusion.
> >>
> > Well, if one API is marked as deprecated, then there will be no confusion
> > for users, since using the wrong one will give a warning pointing to the
> > right one.
> >
> >> If we add a new API, we can then either remove the old API entirely in
> >> 22.11 (effectively renaming it), or we remove the new API in 22.11 and
> >> rename it back to the old function name. I don't think neither of these
> >> is a good solution, as we risk introducing more users for the API that
> >> will later change.
> > The new API will not be renamed to the old one, since that would break
> apps
> > using it without proper deprecation process. Removing the old one alone
> > would be the approach to be used, but it would be correctly following the
> > deprecation process and giving users at least 1 year, if no 2, of notice
> > about the change.
> >
> >>
> >> I think the pain of updating current software for 21.11 (while keeping
> >> compatibility with 20.11 ABI!) is going to happen regardless, and whether
> we
> >> decide to add a "temporary" new API or permanently rename the old one.
> It's
> >> (in my opinion) easier to just bite the bullet and update the function in
> >> 21.11.
> > I fail to see the issue with adding a new function. Whether we add a new
> > function or add a parameter to the existing one, code will have to change
> > either way. The advantage of the former scheme, adding the new function,
> is
> > that it shows that we are serious about our ABI/API compatibility process,
> > and are not lax about passing exceptions when other options are available.
> >
> >>
> >> However, if the tech board feels like adding a new API is a good solution,
> >> then okay, but we need to flesh out roadmap a bit better. Do we rename
> the
> >> old API, or do we add a temporary new API?
> >
> > New API added, old API deprecated. In future old API goes away leaving
> new
> > API as the only option.
> >
> > /Bruce
> >
>
> Okay, so it's settled then. I revoke my ack for this patch, and we need
> a new deprecation notice.
A new deprecation notice was sent [1], targeting an API change in DPDK 22.02.
For the unmapping issue mentioned before, we developed a compromise solution
to optimize the partial-unmap logic in DPDK 21.11, and it is compatible with
the current API.
[1] https://mails.dpdk.org/archives/dev/2021-August/217802.html
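For illustration, the transition agreed on by the techboard would follow
the usual pattern sketched below; the new function name is hypothetical:

    /* new API, added alongside the old one */
    int rte_vfio_container_dma_map_v2(int container_fd, uint64_t vaddr,
                                      uint64_t iova, uint64_t len,
                                      uint64_t pagesz);

    /* old API kept for the deprecation period, marked accordingly */
    __rte_deprecated
    int rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
                                   uint64_t iova, uint64_t len);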
Thanks for your suggestion and support!
Regards,
Xuan
>
> --
> Thanks,
> Anatoly
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC] eventdev: uninline inline API functions
2021-08-31 12:28 0% ` Jerin Jacob
@ 2021-08-31 12:34 0% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2021-08-31 12:34 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Jerin Jacob, Pavan Nikhilesh, dpdk-dev, Bogdan Tanasa
On 2021-08-31 14:28, Jerin Jacob wrote:
> On Mon, Aug 30, 2021 at 9:30 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> Replace the inline functions in the eventdev user application API with
>> regular non-inline API calls. This allows for a cleaner and more
>> simple API/ABI, but might well also cause performance regressions.
>>
>> The purpose of this RFC patch is to allow for performance testing.
>>
>> The rte_eventdev struct declaration should be moved off the public
>> API.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> I think we need to align all DPDK subsystems to a similar scheme.[1]
That makes perfect sense.
> I see a -5% kind of regression, based on the workload.
>
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20210820162834.12544-2-konstantin.ananyev@intel.com/
>
>> ---
>> drivers/net/octeontx/octeontx_ethdev.h | 1 +
>> lib/eventdev/rte_event_eth_rx_adapter.h | 1 +
>> lib/eventdev/rte_event_eth_tx_adapter.c | 31 ++++++++
>> lib/eventdev/rte_event_eth_tx_adapter.h | 35 ++-------
>> lib/eventdev/rte_eventdev.c | 82 +++++++++++++++++++++
>> lib/eventdev/rte_eventdev.h | 94 +++----------------------
>> lib/eventdev/version.map | 4 ++
>> 7 files changed, 134 insertions(+), 114 deletions(-)
>>
>> diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
>> index b73515de37..9402105fcf 100644
>> --- a/drivers/net/octeontx/octeontx_ethdev.h
>> +++ b/drivers/net/octeontx/octeontx_ethdev.h
>> @@ -9,6 +9,7 @@
>>
>> #include <rte_common.h>
>> #include <ethdev_driver.h>
>> +#include <eventdev_pmd.h>
>> #include <rte_eventdev.h>
>> #include <rte_mempool.h>
>> #include <rte_memory.h>
>> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
>> index 182dd2e5dd..79f4822fb0 100644
>> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
>> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
>> @@ -84,6 +84,7 @@ extern "C" {
>> #include <rte_service.h>
>>
>> #include "rte_eventdev.h"
>> +#include "eventdev_pmd.h"
>>
>> #define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
>>
>> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
>> index 18c0359db7..74f88e6147 100644
>> --- a/lib/eventdev/rte_event_eth_tx_adapter.c
>> +++ b/lib/eventdev/rte_event_eth_tx_adapter.c
>> @@ -1154,6 +1154,37 @@ rte_event_eth_tx_adapter_start(uint8_t id)
>> return ret;
>> }
>>
>> +uint16_t
>> +rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
>> + uint8_t port_id,
>> + struct rte_event ev[],
>> + uint16_t nb_events,
>> + const uint8_t flags)
>> +{
>> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> + if (dev_id >= RTE_EVENT_MAX_DEVS ||
>> + !rte_eventdevs[dev_id].attached) {
>> + rte_errno = EINVAL;
>> + return 0;
>> + }
>> +
>> + if (port_id >= dev->data->nb_ports) {
>> + rte_errno = EINVAL;
>> + return 0;
>> + }
>> +#endif
>> + rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
>> + nb_events, flags);
>> + if (flags)
>> + return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
>> + ev, nb_events);
>> + else
>> + return dev->txa_enqueue(dev->data->ports[port_id], ev,
>> + nb_events);
>> +}
>> +
>> int
>> rte_event_eth_tx_adapter_stats_get(uint8_t id,
>> struct rte_event_eth_tx_adapter_stats *stats)
>> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
>> index 8c59547165..3cd65e8a09 100644
>> --- a/lib/eventdev/rte_event_eth_tx_adapter.h
>> +++ b/lib/eventdev/rte_event_eth_tx_adapter.h
>> @@ -79,6 +79,7 @@ extern "C" {
>> #include <rte_mbuf.h>
>>
>> #include "rte_eventdev.h"
>> +#include "eventdev_pmd.h"
>>
>> /**
>> * Adapter configuration structure
>> @@ -348,36 +349,12 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
>> * one or more events. This error code is only applicable to
>> * closed systems.
>> */
>> -static inline uint16_t
>> +uint16_t
>> rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
>> - uint8_t port_id,
>> - struct rte_event ev[],
>> - uint16_t nb_events,
>> - const uint8_t flags)
>> -{
>> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> - if (dev_id >= RTE_EVENT_MAX_DEVS ||
>> - !rte_eventdevs[dev_id].attached) {
>> - rte_errno = EINVAL;
>> - return 0;
>> - }
>> -
>> - if (port_id >= dev->data->nb_ports) {
>> - rte_errno = EINVAL;
>> - return 0;
>> - }
>> -#endif
>> - rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
>> - nb_events, flags);
>> - if (flags)
>> - return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
>> - ev, nb_events);
>> - else
>> - return dev->txa_enqueue(dev->data->ports[port_id], ev,
>> - nb_events);
>> -}
>> + uint8_t port_id,
>> + struct rte_event ev[],
>> + uint16_t nb_events,
>> + const uint8_t flags);
>>
>> /**
>> * Retrieve statistics for an adapter
>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>> index 594dd5e759..e2dad8a838 100644
>> --- a/lib/eventdev/rte_eventdev.c
>> +++ b/lib/eventdev/rte_eventdev.c
>> @@ -1119,6 +1119,65 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
>> return count;
>> }
>>
>> +static __rte_always_inline uint16_t
>> +__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> + const struct rte_event ev[], uint16_t nb_events,
>> + const event_enqueue_burst_t fn)
>> +{
>> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> + if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> + rte_errno = EINVAL;
>> + return 0;
>> + }
>> +
>> + if (port_id >= dev->data->nb_ports) {
>> + rte_errno = EINVAL;
>> + return 0;
>> + }
>> +#endif
>> + rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
>> + /*
>> + * Allow zero cost non burst mode routine invocation if application
>> + * requests nb_events as const one
>> + */
>> + if (nb_events == 1)
>> + return (*dev->enqueue)(dev->data->ports[port_id], ev);
>> + else
>> + return fn(dev->data->ports[port_id], ev, nb_events);
>> +}
>> +
>> +uint16_t
>> +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> + const struct rte_event ev[], uint16_t nb_events)
>> +{
>> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> + return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> + dev->enqueue_burst);
>> +}
>> +
>> +uint16_t
>> +rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>> + const struct rte_event ev[], uint16_t nb_events)
>> +{
>> + const struct rte_eventdev *dev = &rte_event_devices[dev_id];
>> +
>> + return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> + dev->enqueue_new_burst);
>> +}
>> +
>> +uint16_t
>> +rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
>> + const struct rte_event ev[], uint16_t nb_events)
>> +{
>> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> + return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> + dev->enqueue_forward_burst);
>> +}
>> +
>> int
>> rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>> uint64_t *timeout_ticks)
>> @@ -1135,6 +1194,29 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>> return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
>> }
>>
>> +uint16_t
>> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>> + uint16_t nb_events, uint64_t timeout_ticks)
>> +{
>> + struct rte_eventdev *dev = &rte_event_devices[dev_id];
>> +
>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> + if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> + rte_errno = EINVAL;
>> + return 0;
>> + }
>> +
>> + if (port_id >= dev->data->nb_ports) {
>> + rte_errno = EINVAL;
>> + return 0;
>> + }
>> +#endif
>> + rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
>> +
>> + return (*dev->dequeue_burst)(dev->data->ports[port_id], ev, nb_events,
>> + timeout_ticks);
>> +}
>> +
>> int
>> rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
>> {
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index a9c496fb62..451e9fb0a0 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -1445,38 +1445,6 @@ struct rte_eventdev {
>> void *reserved_ptrs[3]; /**< Reserved for future fields */
>> } __rte_cache_aligned;
>>
>> -extern struct rte_eventdev *rte_eventdevs;
>> -/** @internal The pool of rte_eventdev structures. */
>> -
>> -static __rte_always_inline uint16_t
>> -__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> - const struct rte_event ev[], uint16_t nb_events,
>> - const event_enqueue_burst_t fn)
>> -{
>> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> - if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> - rte_errno = EINVAL;
>> - return 0;
>> - }
>> -
>> - if (port_id >= dev->data->nb_ports) {
>> - rte_errno = EINVAL;
>> - return 0;
>> - }
>> -#endif
>> - rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
>> - /*
>> - * Allow zero cost non burst mode routine invocation if application
>> - * requests nb_events as const one
>> - */
>> - if (nb_events == 1)
>> - return (*dev->enqueue)(dev->data->ports[port_id], ev);
>> - else
>> - return fn(dev->data->ports[port_id], ev, nb_events);
>> -}
>> -
>> /**
>> * Enqueue a burst of events objects or an event object supplied in *rte_event*
>> * structure on an event device designated by its *dev_id* through the event
>> @@ -1520,15 +1488,9 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> * closed systems.
>> * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>> */
>> -static inline uint16_t
>> +uint16_t
>> rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> - const struct rte_event ev[], uint16_t nb_events)
>> -{
>> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> - return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> - dev->enqueue_burst);
>> -}
>> + const struct rte_event ev[], uint16_t nb_events);
>>
>> /**
>> * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_NEW* on
>> @@ -1571,15 +1533,9 @@ rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>> * @see rte_event_enqueue_burst()
>> */
>> -static inline uint16_t
>> +uint16_t
>> rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>> - const struct rte_event ev[], uint16_t nb_events)
>> -{
>> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> - return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> - dev->enqueue_new_burst);
>> -}
>> + const struct rte_event ev[], uint16_t nb_events);
>>
>> /**
>> * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_FORWARD*
>> @@ -1622,15 +1578,10 @@ rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>> * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>> * @see rte_event_enqueue_burst()
>> */
>> -static inline uint16_t
>> +uint16_t
>> rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
>> - const struct rte_event ev[], uint16_t nb_events)
>> -{
>> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> - return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> - dev->enqueue_forward_burst);
>> -}
>> + const struct rte_event ev[],
>> + uint16_t nb_events);
>>
>> /**
>> * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
>> @@ -1727,36 +1678,9 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>> *
>> * @see rte_event_port_dequeue_depth()
>> */
>> -static inline uint16_t
>> +uint16_t
>> rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>> - uint16_t nb_events, uint64_t timeout_ticks)
>> -{
>> - struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> - if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> - rte_errno = EINVAL;
>> - return 0;
>> - }
>> -
>> - if (port_id >= dev->data->nb_ports) {
>> - rte_errno = EINVAL;
>> - return 0;
>> - }
>> -#endif
>> - rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
>> - /*
>> - * Allow zero cost non burst mode routine invocation if application
>> - * requests nb_events as const one
>> - */
>> - if (nb_events == 1)
>> - return (*dev->dequeue)(
>> - dev->data->ports[port_id], ev, timeout_ticks);
>> - else
>> - return (*dev->dequeue_burst)(
>> - dev->data->ports[port_id], ev, nb_events,
>> - timeout_ticks);
>> -}
>> + uint16_t nb_events, uint64_t timeout_ticks);
>>
>> /**
>> * Link multiple source event queues supplied in *queues* to the destination
>> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
>> index 88625621ec..8da79cbdc0 100644
>> --- a/lib/eventdev/version.map
>> +++ b/lib/eventdev/version.map
>> @@ -13,7 +13,11 @@ DPDK_22 {
>> rte_event_crypto_adapter_stats_get;
>> rte_event_crypto_adapter_stats_reset;
>> rte_event_crypto_adapter_stop;
>> + rte_event_enqueue_burst;
>> + rte_event_enqueue_new_burst;
>> + rte_event_enqueue_forward_burst;
>> rte_event_dequeue_timeout_ticks;
>> + rte_event_dequeue_burst;
>> rte_event_dev_attr_get;
>> rte_event_dev_close;
>> rte_event_dev_configure;
>> --
>> 2.17.1
>>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC] eventdev: uninline inline API functions
2021-08-30 16:00 2% ` [dpdk-dev] [RFC] eventdev: uninline inline API functions Mattias Rönnblom
@ 2021-08-31 12:28 0% ` Jerin Jacob
2021-08-31 12:34 0% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-31 12:28 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: Jerin Jacob, Pavan Nikhilesh, dpdk-dev, bogdan.tanasa
On Mon, Aug 30, 2021 at 9:30 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Replace the inline functions in the eventdev user application API with
> regular non-inline API calls. This allows for a cleaner and simpler
> API/ABI, but might well also cause performance regressions.
>
> The purpose of this RFC patch is to allow for performance testing.
>
> The rte_eventdev struct declaration should be moved off the public
> API.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
I think we need to align all DPDK subsystems to a similar scheme [1].
I see around a -5% regression, depending on the workload.
[1]
https://patches.dpdk.org/project/dpdk/patch/20210820162834.12544-2-konstantin.ananyev@intel.com/
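For context, a minimal sketch of the uninlining pattern under discussion;
the rte_foo_* names below are hypothetical placeholders, not the real
eventdev symbols:

/* Hypothetical device table; stands in for e.g. rte_eventdevs[]. */
struct rte_foo_dev {
	uint16_t (*enqueue)(void *port, void *ev);
	void *port;
};
extern struct rte_foo_dev rte_foo_devs[];

/* Before: the fast-path call is a static inline in the public header,
 * so the device array and its struct layout are baked into application
 * binaries (any layout change breaks the ABI).
 */
static inline uint16_t
rte_foo_enqueue(uint8_t dev_id, void *ev)
{
	const struct rte_foo_dev *dev = &rte_foo_devs[dev_id];

	return dev->enqueue(dev->port, ev);
}

/* After: the header carries only the declaration; the body moves into
 * the library and the symbol is exported via version.map. Applications
 * then pay one extra function call per enqueue, which is roughly where
 * the regression mentioned above comes from.
 */
uint16_t rte_foo_enqueue(uint8_t dev_id, void *ev);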
> ---
> drivers/net/octeontx/octeontx_ethdev.h | 1 +
> lib/eventdev/rte_event_eth_rx_adapter.h | 1 +
> lib/eventdev/rte_event_eth_tx_adapter.c | 31 ++++++++
> lib/eventdev/rte_event_eth_tx_adapter.h | 35 ++-------
> lib/eventdev/rte_eventdev.c | 82 +++++++++++++++++++++
> lib/eventdev/rte_eventdev.h | 94 +++----------------------
> lib/eventdev/version.map | 4 ++
> 7 files changed, 134 insertions(+), 114 deletions(-)
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
> index b73515de37..9402105fcf 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.h
> +++ b/drivers/net/octeontx/octeontx_ethdev.h
> @@ -9,6 +9,7 @@
>
> #include <rte_common.h>
> #include <ethdev_driver.h>
> +#include <eventdev_pmd.h>
> #include <rte_eventdev.h>
> #include <rte_mempool.h>
> #include <rte_memory.h>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
> index 182dd2e5dd..79f4822fb0 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> @@ -84,6 +84,7 @@ extern "C" {
> #include <rte_service.h>
>
> #include "rte_eventdev.h"
> +#include "eventdev_pmd.h"
>
> #define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
>
> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
> index 18c0359db7..74f88e6147 100644
> --- a/lib/eventdev/rte_event_eth_tx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_tx_adapter.c
> @@ -1154,6 +1154,37 @@ rte_event_eth_tx_adapter_start(uint8_t id)
> return ret;
> }
>
> +uint16_t
> +rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
> + uint8_t port_id,
> + struct rte_event ev[],
> + uint16_t nb_events,
> + const uint8_t flags)
> +{
> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> + if (dev_id >= RTE_EVENT_MAX_DEVS ||
> + !rte_eventdevs[dev_id].attached) {
> + rte_errno = EINVAL;
> + return 0;
> + }
> +
> + if (port_id >= dev->data->nb_ports) {
> + rte_errno = EINVAL;
> + return 0;
> + }
> +#endif
> + rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
> + nb_events, flags);
> + if (flags)
> + return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
> + ev, nb_events);
> + else
> + return dev->txa_enqueue(dev->data->ports[port_id], ev,
> + nb_events);
> +}
> +
> int
> rte_event_eth_tx_adapter_stats_get(uint8_t id,
> struct rte_event_eth_tx_adapter_stats *stats)
> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
> index 8c59547165..3cd65e8a09 100644
> --- a/lib/eventdev/rte_event_eth_tx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_tx_adapter.h
> @@ -79,6 +79,7 @@ extern "C" {
> #include <rte_mbuf.h>
>
> #include "rte_eventdev.h"
> +#include "eventdev_pmd.h"
>
> /**
> * Adapter configuration structure
> @@ -348,36 +349,12 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
> * one or more events. This error code is only applicable to
> * closed systems.
> */
> -static inline uint16_t
> +uint16_t
> rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
> - uint8_t port_id,
> - struct rte_event ev[],
> - uint16_t nb_events,
> - const uint8_t flags)
> -{
> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> - if (dev_id >= RTE_EVENT_MAX_DEVS ||
> - !rte_eventdevs[dev_id].attached) {
> - rte_errno = EINVAL;
> - return 0;
> - }
> -
> - if (port_id >= dev->data->nb_ports) {
> - rte_errno = EINVAL;
> - return 0;
> - }
> -#endif
> - rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
> - nb_events, flags);
> - if (flags)
> - return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
> - ev, nb_events);
> - else
> - return dev->txa_enqueue(dev->data->ports[port_id], ev,
> - nb_events);
> -}
> + uint8_t port_id,
> + struct rte_event ev[],
> + uint16_t nb_events,
> + const uint8_t flags);
>
> /**
> * Retrieve statistics for an adapter
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 594dd5e759..e2dad8a838 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -1119,6 +1119,65 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> return count;
> }
>
> +static __rte_always_inline uint16_t
> +__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> + const struct rte_event ev[], uint16_t nb_events,
> + const event_enqueue_burst_t fn)
> +{
> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> + if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> + rte_errno = EINVAL;
> + return 0;
> + }
> +
> + if (port_id >= dev->data->nb_ports) {
> + rte_errno = EINVAL;
> + return 0;
> + }
> +#endif
> + rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
> + /*
> + * Allow zero cost non burst mode routine invocation if application
> + * requests nb_events as const one
> + */
> + if (nb_events == 1)
> + return (*dev->enqueue)(dev->data->ports[port_id], ev);
> + else
> + return fn(dev->data->ports[port_id], ev, nb_events);
> +}
> +
> +uint16_t
> +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> + const struct rte_event ev[], uint16_t nb_events)
> +{
> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> + return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> + dev->enqueue_burst);
> +}
> +
> +uint16_t
> +rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
> + const struct rte_event ev[], uint16_t nb_events)
> +{
> + const struct rte_eventdev *dev = &rte_event_devices[dev_id];
> +
> + return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> + dev->enqueue_new_burst);
> +}
> +
> +uint16_t
> +rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
> + const struct rte_event ev[], uint16_t nb_events)
> +{
> + const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> + return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> + dev->enqueue_forward_burst);
> +}
> +
> int
> rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
> uint64_t *timeout_ticks)
> @@ -1135,6 +1194,29 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
> return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
> }
>
> +uint16_t
> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> + uint16_t nb_events, uint64_t timeout_ticks)
> +{
> + struct rte_eventdev *dev = &rte_event_devices[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> + if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> + rte_errno = EINVAL;
> + return 0;
> + }
> +
> + if (port_id >= dev->data->nb_ports) {
> + rte_errno = EINVAL;
> + return 0;
> + }
> +#endif
> + rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
> +
> + return (*dev->dequeue_burst)(dev->data->ports[port_id], ev, nb_events,
> + timeout_ticks);
> +}
> +
> int
> rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
> {
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index a9c496fb62..451e9fb0a0 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1445,38 +1445,6 @@ struct rte_eventdev {
> void *reserved_ptrs[3]; /**< Reserved for future fields */
> } __rte_cache_aligned;
>
> -extern struct rte_eventdev *rte_eventdevs;
> -/** @internal The pool of rte_eventdev structures. */
> -
> -static __rte_always_inline uint16_t
> -__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> - const struct rte_event ev[], uint16_t nb_events,
> - const event_enqueue_burst_t fn)
> -{
> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> - if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> - rte_errno = EINVAL;
> - return 0;
> - }
> -
> - if (port_id >= dev->data->nb_ports) {
> - rte_errno = EINVAL;
> - return 0;
> - }
> -#endif
> - rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
> - /*
> - * Allow zero cost non burst mode routine invocation if application
> - * requests nb_events as const one
> - */
> - if (nb_events == 1)
> - return (*dev->enqueue)(dev->data->ports[port_id], ev);
> - else
> - return fn(dev->data->ports[port_id], ev, nb_events);
> -}
> -
> /**
> * Enqueue a burst of events objects or an event object supplied in *rte_event*
> * structure on an event device designated by its *dev_id* through the event
> @@ -1520,15 +1488,9 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> * closed systems.
> * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
> */
> -static inline uint16_t
> +uint16_t
> rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> - const struct rte_event ev[], uint16_t nb_events)
> -{
> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> - return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> - dev->enqueue_burst);
> -}
> + const struct rte_event ev[], uint16_t nb_events);
>
> /**
> * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_NEW* on
> @@ -1571,15 +1533,9 @@ rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
> * @see rte_event_enqueue_burst()
> */
> -static inline uint16_t
> +uint16_t
> rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
> - const struct rte_event ev[], uint16_t nb_events)
> -{
> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> - return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> - dev->enqueue_new_burst);
> -}
> + const struct rte_event ev[], uint16_t nb_events);
>
> /**
> * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_FORWARD*
> @@ -1622,15 +1578,10 @@ rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
> * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
> * @see rte_event_enqueue_burst()
> */
> -static inline uint16_t
> +uint16_t
> rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
> - const struct rte_event ev[], uint16_t nb_events)
> -{
> - const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> - return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> - dev->enqueue_forward_burst);
> -}
> + const struct rte_event ev[],
> + uint16_t nb_events);
>
> /**
> * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
> @@ -1727,36 +1678,9 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
> *
> * @see rte_event_port_dequeue_depth()
> */
> -static inline uint16_t
> +uint16_t
> rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> - uint16_t nb_events, uint64_t timeout_ticks)
> -{
> - struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> - if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> - rte_errno = EINVAL;
> - return 0;
> - }
> -
> - if (port_id >= dev->data->nb_ports) {
> - rte_errno = EINVAL;
> - return 0;
> - }
> -#endif
> - rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
> - /*
> - * Allow zero cost non burst mode routine invocation if application
> - * requests nb_events as const one
> - */
> - if (nb_events == 1)
> - return (*dev->dequeue)(
> - dev->data->ports[port_id], ev, timeout_ticks);
> - else
> - return (*dev->dequeue_burst)(
> - dev->data->ports[port_id], ev, nb_events,
> - timeout_ticks);
> -}
> + uint16_t nb_events, uint64_t timeout_ticks);
>
> /**
> * Link multiple source event queues supplied in *queues* to the destination
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 88625621ec..8da79cbdc0 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -13,7 +13,11 @@ DPDK_22 {
> rte_event_crypto_adapter_stats_get;
> rte_event_crypto_adapter_stats_reset;
> rte_event_crypto_adapter_stop;
> + rte_event_enqueue_burst;
> + rte_event_enqueue_new_burst;
> + rte_event_enqueue_forward_burst;
> rte_event_dequeue_timeout_ticks;
> + rte_event_dequeue_burst;
> rte_event_dev_attr_get;
> rte_event_dev_close;
> rte_event_dev_configure;
> --
> 2.17.1
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
2021-08-31 6:08 3% ` Akhil Goyal
@ 2021-08-31 7:56 6% ` Shijith Thotton
2021-09-07 8:34 0% ` Jerin Jacob
2021-09-07 8:53 0% ` Gujjar, Abhinandan S
1 sibling, 2 replies; 200+ results
From: Shijith Thotton @ 2021-08-31 7:56 UTC (permalink / raw)
To: dev
Cc: Shijith Thotton, jerinj, anoobj, pbhagavatula, gakhil,
Abhinandan Gujjar, Ray Kinsella, Ankur Dwivedi
In the crypto adapter metadata, the reserved bytes in the request info
structure are a placeholder for the response info. They enforce an order
of operations if the structures are updated using memcpy, to avoid
overwriting the response info. It is logical to move the reserved space
out of the request info; this also solves the ordering issue mentioned
before.
This patch removes the reserved field from the request info and changes
the event crypto metadata type from a union to a structure, to make
space for the response info.
The app and drivers are updated as per the metadata change.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
v3:
* Updated ABI section of release notes.
v2:
* Updated deprecation notice.
v1:
* Rebased.
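To illustrate the ordering point in the commit message, a minimal usage
sketch with the new struct layout (variable names such as cdev_id, qp_id
and response_info are illustrative, not taken from the patch). Since
response_info and request_info no longer overlap, the fill order of the
two members does not matter:

	struct rte_event_crypto_metadata m_data;

	memset(&m_data, 0, sizeof(m_data));
	/* Request side: cryptodev and queue pair the adapter should use. */
	m_data.request_info.cdev_id = cdev_id;
	m_data.request_info.queue_pair_id = qp_id;
	/* Response side: the event to be generated on crypto completion. */
	m_data.response_info = response_info;	/* a struct rte_event */

	/* Attach the metadata to the session (session-based path). */
	rte_cryptodev_sym_session_set_user_data(sess, &m_data,
						sizeof(m_data));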
app/test/test_event_crypto_adapter.c | 14 +++++++-------
doc/guides/rel_notes/deprecation.rst | 6 ------
doc/guides/rel_notes/release_21_11.rst | 2 ++
drivers/crypto/octeontx/otx_cryptodev_ops.c | 8 ++++----
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 4 ++--
.../event/octeontx2/otx2_evdev_crypto_adptr_tx.h | 4 ++--
lib/eventdev/rte_event_crypto_adapter.c | 8 ++++----
lib/eventdev/rte_event_crypto_adapter.h | 15 +++++----------
8 files changed, 26 insertions(+), 35 deletions(-)
diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
index 3ad20921e2..0d73694d3a 100644
--- a/app/test/test_event_crypto_adapter.c
+++ b/app/test/test_event_crypto_adapter.c
@@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)
{
struct rte_crypto_sym_xform cipher_xform;
struct rte_cryptodev_sym_session *sess;
- union rte_event_crypto_metadata m_data;
+ struct rte_event_crypto_metadata m_data;
struct rte_crypto_sym_op *sym_op;
struct rte_crypto_op *op;
struct rte_mbuf *m;
@@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)
{
struct rte_crypto_sym_xform cipher_xform;
struct rte_cryptodev_sym_session *sess;
- union rte_event_crypto_metadata m_data;
+ struct rte_event_crypto_metadata m_data;
struct rte_crypto_sym_op *sym_op;
struct rte_crypto_op *op;
struct rte_mbuf *m;
@@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
/* Fill in private user data information */
rte_memcpy(&m_data.response_info, &response_info,
- sizeof(m_data));
+ sizeof(response_info));
rte_cryptodev_sym_session_set_user_data(sess,
&m_data, sizeof(m_data));
}
@@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
op->private_data_offset = len;
/* Fill in private data information */
rte_memcpy(&m_data.response_info, &response_info,
- sizeof(m_data));
+ sizeof(response_info));
rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
}
@@ -519,7 +519,7 @@ configure_cryptodev(void)
DEFAULT_NUM_XFORMS *
sizeof(struct rte_crypto_sym_xform) +
MAXIMUM_IV_LENGTH +
- sizeof(union rte_event_crypto_metadata),
+ sizeof(struct rte_event_crypto_metadata),
rte_socket_id());
if (params.op_mpool == NULL) {
RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
@@ -549,12 +549,12 @@ configure_cryptodev(void)
* to include the session headers & private data
*/
session_size = rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
- session_size += sizeof(union rte_event_crypto_metadata);
+ session_size += sizeof(struct rte_event_crypto_metadata);
params.session_mpool = rte_cryptodev_sym_session_pool_create(
"CRYPTO_ADAPTER_SESSION_MP",
MAX_NB_SESSIONS, 0, 0,
- sizeof(union rte_event_crypto_metadata),
+ sizeof(struct rte_event_crypto_metadata),
SOCKET_ID_ANY);
TEST_ASSERT_NOT_NULL(params.session_mpool,
"session mempool allocation failed\n");
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..58ee95c020 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -266,12 +266,6 @@ Deprecation Notices
values to the function ``rte_event_eth_rx_adapter_queue_add`` using
the structure ``rte_event_eth_rx_adapter_queue_add``.
-* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space holder
- for ``response_info``. Both should be decoupled for better clarity.
- New space for ``response_info`` can be made by changing
- ``rte_event_crypto_metadata`` type to structure from union.
- This change is targeted for DPDK 21.11.
-
* metrics: The function ``rte_metrics_init`` will have a non-void return
in order to notify errors instead of calling ``rte_exit``.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..ab76d5dd55 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -100,6 +100,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* eventdev: Modified type of ``union rte_event_crypto_metadata`` to struct and
+ removed reserved bytes from ``struct rte_event_crypto_request``.
Known Issues
------------
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index eac6796cfb..c51be63146 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws, uintptr_t req,
ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]);
}
-static inline union rte_event_crypto_metadata *
+static inline struct rte_event_crypto_metadata *
get_event_crypto_mdata(struct rte_crypto_op *op)
{
- union rte_event_crypto_metadata *ec_mdata;
+ struct rte_event_crypto_metadata *ec_mdata;
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
ec_mdata = rte_cryptodev_sym_session_get_user_data(
op->sym->session);
else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
op->private_data_offset)
- ec_mdata = (union rte_event_crypto_metadata *)
+ ec_mdata = (struct rte_event_crypto_metadata *)
((uint8_t *)op + op->private_data_offset);
else
return NULL;
@@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op *op)
uint16_t __rte_hot
otx_crypto_adapter_enqueue(void *port, struct rte_crypto_op *op)
{
- union rte_event_crypto_metadata *ec_mdata;
+ struct rte_event_crypto_metadata *ec_mdata;
struct cpt_instance *instance;
struct cpt_request_info *req;
struct rte_event *rsp_info;
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 42100154cd..952d1352f4 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
struct rte_crypto_op *op,
uint64_t cpt_inst_w7)
{
- union rte_event_crypto_metadata *m_data;
+ struct rte_event_crypto_metadata *m_data;
union cpt_inst_s inst;
uint64_t lmt_status;
@@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
}
} else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
+ m_data = (struct rte_event_crypto_metadata *)
((uint8_t *)op +
op->private_data_offset);
} else {
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
index ecf7eb9f56..458e8306d7 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
@@ -16,7 +16,7 @@
static inline uint16_t
otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
{
- union rte_event_crypto_metadata *m_data;
+ struct rte_event_crypto_metadata *m_data;
struct rte_crypto_op *crypto_op;
struct rte_cryptodev *cdev;
struct otx2_cpt_qp *qp;
@@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
qp_id = m_data->request_info.queue_pair_id;
} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
crypto_op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
+ m_data = (struct rte_event_crypto_metadata *)
((uint8_t *)crypto_op +
crypto_op->private_data_offset);
cdev_id = m_data->request_info.cdev_id;
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index e1d38d383d..6977391ae9 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
struct rte_event *ev, unsigned int cnt)
{
struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
- union rte_event_crypto_metadata *m_data = NULL;
+ struct rte_event_crypto_metadata *m_data = NULL;
struct crypto_queue_pair_info *qp_info = NULL;
struct rte_crypto_op *crypto_op;
unsigned int i, n;
@@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
len++;
} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
crypto_op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
+ m_data = (struct rte_event_crypto_metadata *)
((uint8_t *)crypto_op +
crypto_op->private_data_offset);
cdev_id = m_data->request_info.cdev_id;
@@ -504,7 +504,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
struct rte_crypto_op **ops, uint16_t num)
{
struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
- union rte_event_crypto_metadata *m_data = NULL;
+ struct rte_event_crypto_metadata *m_data = NULL;
uint8_t event_dev_id = adapter->eventdev_id;
uint8_t event_port_id = adapter->event_port_id;
struct rte_event events[BATCH_SIZE];
@@ -523,7 +523,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
ops[i]->sym->session);
} else if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
ops[i]->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
+ m_data = (struct rte_event_crypto_metadata *)
((uint8_t *)ops[i] +
ops[i]->private_data_offset);
}
diff --git a/lib/eventdev/rte_event_crypto_adapter.h b/lib/eventdev/rte_event_crypto_adapter.h
index f8c6cca87c..3c24d9d9df 100644
--- a/lib/eventdev/rte_event_crypto_adapter.h
+++ b/lib/eventdev/rte_event_crypto_adapter.h
@@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
* provide event request information to the adapter.
*/
struct rte_event_crypto_request {
- uint8_t resv[8];
- /**< Overlaps with first 8 bytes of struct rte_event
- * that encode the response event information. Application
- * is expected to fill in struct rte_event response_info.
- */
uint16_t cdev_id;
/**< cryptodev ID to be used */
uint16_t queue_pair_id;
@@ -223,16 +218,16 @@ struct rte_event_crypto_request {
* operation. If the transfer is done by SW, event response information
* will be used by the adapter.
*/
-union rte_event_crypto_metadata {
- struct rte_event_crypto_request request_info;
- /**< Request information to be filled in by application
- * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
- */
+struct rte_event_crypto_metadata {
struct rte_event response_info;
/**< Response information to be filled in by application
* for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
* RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
*/
+ struct rte_event_crypto_request request_info;
+ /**< Request information to be filled in by application
+ * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
+ */
};
/**
--
2.25.1
^ permalink raw reply [relevance 6%]
* Re: [dpdk-dev] [PATCH v2] eventdev: update crypto adapter metadata structures
2021-08-31 6:08 3% ` Akhil Goyal
@ 2021-08-31 6:51 0% ` Shijith Thotton
0 siblings, 0 replies; 200+ results
From: Shijith Thotton @ 2021-08-31 6:51 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Jerin Jacob Kollanukkaran, Anoob Joseph,
Pavan Nikhilesh Bhagavatula, Abhinandan Gujjar, Ray Kinsella,
Ankur Dwivedi
>
>> In the crypto adapter metadata, the reserved bytes in the request info
>> structure are a placeholder for the response info. They enforce an order
>> of operations if the structures are updated using memcpy, to avoid
>> overwriting the response info. It is logical to move the reserved space
>> out of the request info; this also solves the ordering issue mentioned
>> before.
>>
>> This patch removes the reserved field from the request info and changes
>> the event crypto metadata type from a union to a structure, to make
>> space for the response info.
>>
>> The app and drivers are updated as per the metadata change.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> ---
>> v2:
>> * Updated deprecation notice.
>>
>Please also update the release notes API/ABI section for the changes
>introduced in this patch.
I will send v3 with the changes.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] eventdev: update crypto adapter metadata structures
@ 2021-08-31 6:08 3% ` Akhil Goyal
2021-08-31 6:51 0% ` Shijith Thotton
2021-08-31 7:56 6% ` [dpdk-dev] [PATCH v3] " Shijith Thotton
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-08-31 6:08 UTC (permalink / raw)
To: Shijith Thotton, dev
Cc: Shijith Thotton, Jerin Jacob Kollanukkaran, Anoob Joseph,
Pavan Nikhilesh Bhagavatula, Abhinandan Gujjar, Ray Kinsella,
Ankur Dwivedi
> In the crypto adapter metadata, the reserved bytes in the request info
> structure are a placeholder for the response info. They enforce an order
> of operations if the structures are updated using memcpy, to avoid
> overwriting the response info. It is logical to move the reserved space
> out of the request info; this also solves the ordering issue mentioned
> before.
>
> This patch removes the reserved field from the request info and changes
> the event crypto metadata type from a union to a structure, to make
> space for the response info.
>
> The app and drivers are updated as per the metadata change.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
> v2:
> * Updated deprecation notice.
>
Please also update the release notes API/ABI section for the changes introduced in this patch.
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3] ethdev: add namespace
2021-08-27 1:19 1% ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
@ 2021-08-30 17:19 1% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-08-30 17:19 UTC (permalink / raw)
To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
Min Hu (Connor),
Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Haiyue Wang,
Beilei Xing, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
Keith Wiles, Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal,
Declan Doherty, Ray Kinsella, Radu Nicolau, Hemant Agrawal,
Sachin Saxena, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, John W. Linville, Ciara Loftus,
Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Bruce Richardson, Konstantin Ananyev,
Ruifeng Wang, Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk,
Shai Brandes, Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh,
Gaetan Rivet, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Jingjing Wu, Qiming Yang, Andrew Boyer,
Rosen Xu, Srisivasubramanian Srinivasan, Jakub Grajciar,
Zyta Szpak, Liron Himi, Stephen Hemminger, Long Li,
Martin Spinler, Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa,
Harman Kalra, Anoob Joseph, Nalla Pradeep,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Devendra Singh Rawat, Jasvinder Singh, Maciej Czekaj, Jian Wang,
Maxime Coquelin, Chenbo Xia, Yong Wang, Nicolas Chautru,
David Hunt, Harry van Haaren, Bernard Iremonger, Anatoly Burakov,
John McNamara, Kirill Rybalchenko, Byron Marohn, Yipeng Wang
Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also updated some struct names to have the 'rte_eth' prefix.
All internal components switched to using the new names.
Syntax is fixed on the lines that this patch touches.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
v2:
* Updated internal components
* Removed deprecation notice
v3:
* Updated missing macros / structs that David highlighted
* Added release notes update
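A minimal sketch of the backward-compatible renaming pattern applied
throughout the patch (the flag and value shown are illustrative, not a
claim about the exact definitions): the RTE_ETH_-prefixed name becomes
the canonical definition, and the old name stays as an alias until the
next LTS:

	/* Canonical, namespaced definition (illustrative value). */
	#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  0x00000001
	/* Backward-compatibility alias, removable in the next LTS. */
	#define DEV_RX_OFFLOAD_VLAN_STRIP      RTE_ETH_RX_OFFLOAD_VLAN_STRIP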
---
app/proc-info/main.c | 8 +-
app/test-eventdev/test_perf_common.c | 4 +-
app/test-eventdev/test_pipeline_common.c | 12 +-
app/test-flow-perf/config.h | 2 +-
app/test-pipeline/init.c | 8 +-
app/test-pmd/cmdline.c | 298 +++---
app/test-pmd/config.c | 202 ++--
app/test-pmd/csumonly.c | 28 +-
app/test-pmd/flowgen.c | 6 +-
app/test-pmd/macfwd.c | 6 +-
app/test-pmd/macswap_common.h | 6 +-
app/test-pmd/parameters.c | 54 +-
app/test-pmd/testpmd.c | 60 +-
app/test-pmd/testpmd.h | 2 +-
app/test-pmd/txonly.c | 6 +-
app/test/test_ethdev_link.c | 68 +-
app/test/test_event_eth_rx_adapter.c | 4 +-
app/test/test_kni.c | 2 +-
app/test/test_link_bonding.c | 4 +-
app/test/test_link_bonding_mode4.c | 4 +-
| 28 +-
app/test/test_pmd_perf.c | 12 +-
app/test/virtual_pmd.c | 10 +-
doc/guides/eventdevs/cnxk.rst | 2 +-
doc/guides/eventdevs/octeontx2.rst | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 +-
doc/guides/nics/bnxt.rst | 26 +-
doc/guides/nics/enic.rst | 2 +-
doc/guides/nics/features.rst | 116 +-
doc/guides/nics/fm10k.rst | 6 +-
doc/guides/nics/intel_vf.rst | 10 +-
doc/guides/nics/ixgbe.rst | 12 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/tap.rst | 2 +-
.../generic_segmentation_offload_lib.rst | 8 +-
doc/guides/prog_guide/mbuf_lib.rst | 18 +-
doc/guides/prog_guide/poll_mode_drv.rst | 8 +-
doc/guides/prog_guide/rte_flow.rst | 34 +-
doc/guides/prog_guide/rte_security.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 12 +-
doc/guides/rel_notes/release_21_11.rst | 3 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 4 +-
doc/guides/testpmd_app_ug/run_app.rst | 2 +-
drivers/bus/dpaa/include/process.h | 16 +-
drivers/common/cnxk/roc_npc.h | 2 +-
drivers/net/af_packet/rte_eth_af_packet.c | 16 +-
drivers/net/af_xdp/rte_eth_af_xdp.c | 12 +-
drivers/net/ark/ark_ethdev.c | 16 +-
drivers/net/atlantic/atl_ethdev.c | 90 +-
drivers/net/atlantic/atl_ethdev.h | 18 +-
drivers/net/atlantic/atl_rxtx.c | 6 +-
drivers/net/avp/avp_ethdev.c | 26 +-
drivers/net/axgbe/axgbe_dev.c | 6 +-
drivers/net/axgbe/axgbe_ethdev.c | 110 +-
drivers/net/axgbe/axgbe_ethdev.h | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 2 +-
drivers/net/axgbe/axgbe_rxtx.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 16 +-
drivers/net/bnxt/bnxt.h | 68 +-
drivers/net/bnxt/bnxt_ethdev.c | 178 ++--
drivers/net/bnxt/bnxt_flow.c | 4 +-
drivers/net/bnxt/bnxt_hwrm.c | 112 +-
drivers/net/bnxt/bnxt_reps.c | 2 +-
drivers/net/bnxt/bnxt_ring.c | 4 +-
drivers/net/bnxt/bnxt_rxq.c | 28 +-
drivers/net/bnxt/bnxt_rxr.c | 4 +-
drivers/net/bnxt/bnxt_rxtx_vec_avx2.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_common.h | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_neon.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
drivers/net/bnxt/bnxt_txr.c | 4 +-
drivers/net/bnxt/bnxt_vnic.c | 30 +-
drivers/net/bnxt/rte_pmd_bnxt.c | 8 +-
drivers/net/bonding/eth_bond_private.h | 4 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 16 +-
drivers/net/bonding/rte_eth_bond_api.c | 6 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 56 +-
drivers/net/cnxk/cn10k_ethdev.c | 38 +-
drivers/net/cnxk/cn10k_rx.c | 4 +-
drivers/net/cnxk/cn10k_tx.c | 4 +-
drivers/net/cnxk/cn9k_ethdev.c | 56 +-
drivers/net/cnxk/cn9k_rx.c | 4 +-
drivers/net/cnxk/cn9k_tx.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 84 +-
drivers/net/cnxk/cnxk_ethdev.h | 49 +-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 112 +-
drivers/net/cnxk/cnxk_link.c | 14 +-
drivers/net/cnxk/cnxk_ptp.c | 4 +-
drivers/net/cnxk/cnxk_rte_flow.c | 2 +-
drivers/net/cxgbe/cxgbe.h | 48 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 50 +-
drivers/net/cxgbe/cxgbe_main.c | 12 +-
drivers/net/cxgbe/sge.c | 2 +-
drivers/net/dpaa/dpaa_ethdev.c | 190 ++--
drivers/net/dpaa/dpaa_ethdev.h | 10 +-
drivers/net/dpaa/dpaa_flow.c | 32 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 34 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 148 +--
drivers/net/dpaa2/dpaa2_ethdev.h | 12 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 8 +-
drivers/net/e1000/e1000_ethdev.h | 18 +-
drivers/net/e1000/em_ethdev.c | 68 +-
drivers/net/e1000/em_rxtx.c | 48 +-
drivers/net/e1000/igb_ethdev.c | 166 +--
drivers/net/e1000/igb_pf.c | 2 +-
drivers/net/e1000/igb_rxtx.c | 120 +--
drivers/net/ena/ena_ethdev.c | 70 +-
drivers/net/ena/ena_ethdev.h | 4 +-
| 76 +-
drivers/net/enetc/enetc_ethdev.c | 38 +-
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 88 +-
drivers/net/enic/enic_main.c | 40 +-
drivers/net/enic/enic_res.c | 52 +-
drivers/net/failsafe/failsafe.c | 8 +-
drivers/net/failsafe/failsafe_intr.c | 4 +-
drivers/net/failsafe/failsafe_ops.c | 82 +-
drivers/net/fm10k/fm10k.h | 4 +-
drivers/net/fm10k/fm10k_ethdev.c | 148 +--
drivers/net/fm10k/fm10k_rxtx_vec.c | 6 +-
drivers/net/hinic/base/hinic_pmd_hwdev.c | 22 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 142 +--
drivers/net/hinic/hinic_pmd_rx.c | 36 +-
drivers/net/hinic/hinic_pmd_rx.h | 22 +-
drivers/net/hns3/hns3_dcb.c | 14 +-
drivers/net/hns3/hns3_ethdev.c | 360 +++----
drivers/net/hns3/hns3_ethdev.h | 12 +-
drivers/net/hns3/hns3_ethdev_vf.c | 108 +-
drivers/net/hns3/hns3_flow.c | 6 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
| 108 +-
| 28 +-
drivers/net/hns3/hns3_rxtx.c | 30 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/hns3/hns3_rxtx_vec.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 278 ++---
drivers/net/i40e/i40e_ethdev.h | 24 +-
drivers/net/i40e/i40e_ethdev_vf.c | 118 +--
drivers/net/i40e/i40e_flow.c | 2 +-
drivers/net/i40e/i40e_hash.c | 156 +--
drivers/net/i40e/i40e_pf.c | 14 +-
drivers/net/i40e/i40e_rxtx.c | 10 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 8 +-
drivers/net/i40e/i40e_vf_representor.c | 48 +-
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 186 ++--
drivers/net/iavf/iavf_hash.c | 300 +++---
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx.h | 24 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 6 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 90 +-
drivers/net/ice/ice_dcf_vf_representor.c | 58 +-
drivers/net/ice/ice_ethdev.c | 190 ++--
drivers/net/ice/ice_ethdev.h | 26 +-
drivers/net/ice/ice_hash.c | 268 ++---
drivers/net/ice/ice_rxtx.c | 8 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 4 +-
drivers/net/ice/ice_rxtx_vec_common.h | 26 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
drivers/net/igc/igc_ethdev.c | 146 +--
drivers/net/igc/igc_ethdev.h | 56 +-
drivers/net/igc/igc_txrx.c | 50 +-
drivers/net/ionic/ionic_ethdev.c | 140 +--
drivers/net/ionic/ionic_ethdev.h | 12 +-
drivers/net/ionic/ionic_lif.c | 36 +-
drivers/net/ionic/ionic_rxtx.c | 10 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 70 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 313 +++---
drivers/net/ixgbe/ixgbe_ethdev.h | 18 +-
drivers/net/ixgbe/ixgbe_fdir.c | 24 +-
drivers/net/ixgbe/ixgbe_flow.c | 2 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 12 +-
drivers/net/ixgbe/ixgbe_pf.c | 38 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 253 +++--
drivers/net/ixgbe/ixgbe_rxtx.h | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 2 +-
drivers/net/ixgbe/ixgbe_tm.c | 16 +-
drivers/net/ixgbe/ixgbe_vf_representor.c | 16 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 14 +-
drivers/net/ixgbe/rte_pmd_ixgbe.h | 4 +-
drivers/net/kni/rte_eth_kni.c | 8 +-
drivers/net/liquidio/lio_ethdev.c | 118 +--
drivers/net/memif/memif_socket.c | 2 +-
drivers/net/memif/rte_eth_memif.c | 14 +-
drivers/net/mlx4/mlx4_ethdev.c | 32 +-
drivers/net/mlx4/mlx4_flow.c | 30 +-
drivers/net/mlx4/mlx4_intr.c | 8 +-
drivers/net/mlx4/mlx4_rxq.c | 20 +-
drivers/net/mlx4/mlx4_txq.c | 24 +-
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_os.c | 6 +-
drivers/net/mlx5/mlx5.c | 4 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_defs.h | 6 +-
drivers/net/mlx5/mlx5_ethdev.c | 6 +-
drivers/net/mlx5/mlx5_flow.c | 54 +-
drivers/net/mlx5/mlx5_flow.h | 12 +-
drivers/net/mlx5/mlx5_flow_dv.c | 44 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 4 +-
| 10 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_rxtx_vec.h | 8 +-
drivers/net/mlx5/mlx5_tx.c | 30 +-
drivers/net/mlx5/mlx5_txq.c | 52 +-
drivers/net/mlx5/mlx5_vlan.c | 4 +-
drivers/net/mlx5/windows/mlx5_os.c | 4 +-
drivers/net/mvneta/mvneta_ethdev.c | 34 +-
drivers/net/mvneta/mvneta_ethdev.h | 12 +-
drivers/net/mvneta/mvneta_rxtx.c | 2 +-
drivers/net/mvpp2/mrvl_ethdev.c | 116 +-
drivers/net/netvsc/hn_ethdev.c | 70 +-
drivers/net/netvsc/hn_rndis.c | 50 +-
drivers/net/nfb/nfb_ethdev.c | 20 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfp/nfp_common.c | 130 +--
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 50 +-
drivers/net/null/rte_eth_null.c | 28 +-
drivers/net/octeontx/octeontx_ethdev.c | 78 +-
drivers/net/octeontx/octeontx_ethdev.h | 32 +-
drivers/net/octeontx/octeontx_ethdev_ops.c | 26 +-
drivers/net/octeontx2/otx2_ethdev.c | 96 +-
drivers/net/octeontx2/otx2_ethdev.h | 66 +-
drivers/net/octeontx2/otx2_ethdev_devargs.c | 12 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 18 +-
drivers/net/octeontx2/otx2_ethdev_sec.c | 8 +-
drivers/net/octeontx2/otx2_flow.c | 2 +-
drivers/net/octeontx2/otx2_flow_ctrl.c | 36 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_link.c | 40 +-
drivers/net/octeontx2/otx2_mcast.c | 2 +-
drivers/net/octeontx2/otx2_ptp.c | 4 +-
| 70 +-
drivers/net/octeontx2/otx2_rx.c | 4 +-
drivers/net/octeontx2/otx2_tx.c | 2 +-
drivers/net/octeontx2/otx2_vlan.c | 42 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 8 +-
drivers/net/pcap/pcap_ethdev.c | 12 +-
drivers/net/pfe/pfe_ethdev.c | 18 +-
drivers/net/qede/base/mcp_public.h | 4 +-
drivers/net/qede/qede_ethdev.c | 152 +--
drivers/net/qede/qede_filter.c | 10 +-
drivers/net/qede/qede_rxtx.c | 2 +-
drivers/net/qede/qede_rxtx.h | 16 +-
drivers/net/ring/rte_eth_ring.c | 20 +-
drivers/net/sfc/sfc.c | 30 +-
drivers/net/sfc/sfc_ef100_rx.c | 10 +-
drivers/net/sfc/sfc_ef100_tx.c | 20 +-
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 8 +-
drivers/net/sfc/sfc_ef10_tx.c | 32 +-
drivers/net/sfc/sfc_ethdev.c | 52 +-
drivers/net/sfc/sfc_flow.c | 2 +-
drivers/net/sfc/sfc_port.c | 54 +-
drivers/net/sfc/sfc_rx.c | 52 +-
drivers/net/sfc/sfc_tx.c | 50 +-
drivers/net/softnic/rte_eth_softnic.c | 12 +-
drivers/net/szedata2/rte_eth_szedata2.c | 14 +-
drivers/net/tap/rte_eth_tap.c | 104 +-
| 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 108 +-
drivers/net/thunderx/nicvf_ethdev.h | 42 +-
drivers/net/txgbe/txgbe_ethdev.c | 244 ++---
drivers/net/txgbe/txgbe_ethdev.h | 18 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 24 +-
drivers/net/txgbe/txgbe_fdir.c | 20 +-
drivers/net/txgbe/txgbe_flow.c | 2 +-
drivers/net/txgbe/txgbe_ipsec.c | 12 +-
drivers/net/txgbe/txgbe_pf.c | 34 +-
drivers/net/txgbe/txgbe_rxtx.c | 312 +++---
drivers/net/txgbe/txgbe_rxtx.h | 4 +-
drivers/net/txgbe/txgbe_tm.c | 16 +-
drivers/net/vhost/rte_eth_vhost.c | 16 +-
drivers/net/virtio/virtio_ethdev.c | 126 +--
drivers/net/vmxnet3/vmxnet3_ethdev.c | 74 +-
drivers/net/vmxnet3/vmxnet3_ethdev.h | 16 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 16 +-
examples/bbdev_app/main.c | 6 +-
examples/bond/main.c | 14 +-
examples/distributor/main.c | 12 +-
examples/ethtool/ethtool-app/main.c | 2 +-
examples/ethtool/lib/rte_ethtool.c | 18 +-
.../pipeline_worker_generic.c | 16 +-
.../eventdev_pipeline/pipeline_worker_tx.c | 12 +-
examples/flow_classify/flow_classify.c | 4 +-
examples/flow_filtering/main.c | 16 +-
examples/ioat/ioatfwd.c | 8 +-
examples/ip_fragmentation/main.c | 14 +-
examples/ip_pipeline/link.c | 20 +-
examples/ip_reassembly/main.c | 20 +-
examples/ipsec-secgw/ipsec-secgw.c | 34 +-
examples/ipsec-secgw/sa.c | 8 +-
examples/ipv4_multicast/main.c | 8 +-
examples/kni/main.c | 12 +-
examples/l2fwd-crypto/main.c | 10 +-
examples/l2fwd-event/l2fwd_common.c | 10 +-
examples/l2fwd-event/main.c | 2 +-
examples/l2fwd-jobstats/main.c | 8 +-
examples/l2fwd-keepalive/main.c | 8 +-
examples/l2fwd/main.c | 8 +-
examples/l3fwd-acl/main.c | 20 +-
examples/l3fwd-graph/main.c | 16 +-
examples/l3fwd-power/main.c | 18 +-
examples/l3fwd/l3fwd_event.c | 4 +-
examples/l3fwd/main.c | 20 +-
examples/link_status_interrupt/main.c | 10 +-
.../client_server_mp/mp_server/init.c | 4 +-
examples/multi_process/symmetric_mp/main.c | 14 +-
examples/ntb/ntb_fwd.c | 6 +-
examples/packet_ordering/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 18 +-
examples/pipeline/obj.c | 20 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 16 +-
examples/qos_sched/init.c | 6 +-
examples/rxtx_callbacks/main.c | 8 +-
examples/server_node_efd/server/init.c | 8 +-
examples/skeleton/basicfwd.c | 4 +-
examples/vhost/main.c | 28 +-
examples/vm_power_manager/main.c | 6 +-
examples/vmdq/main.c | 20 +-
examples/vmdq_dcb/main.c | 40 +-
lib/ethdev/rte_ethdev.c | 193 ++--
lib/ethdev/rte_ethdev.h | 997 +++++++++++-------
lib/ethdev/rte_ethdev_core.h | 2 +-
lib/ethdev/rte_flow.h | 2 +-
lib/gso/rte_gso.c | 20 +-
lib/gso/rte_gso.h | 4 +-
lib/mbuf/rte_mbuf_core.h | 8 +-
lib/mbuf/rte_mbuf_dyn.h | 2 +-
339 files changed, 6728 insertions(+), 6500 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
}
ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
- if (ret == 0 && fc_conf.mode != RTE_FC_NONE) {
+ if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE) {
printf("\t -- flow control mode %s%s high %u low %u pause %u%s%s\n",
- fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
- fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
- fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+ fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+ fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
fc_conf.autoneg ? " auto" : "",
fc_conf.high_water,
fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..41e92143121b 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,14 +668,14 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct test_perf *t = evt_test_priv(test);
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
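For context, the same RSS setup as a standalone sketch with the new names (the helper name and queue counts are illustrative only):

#include <rte_ethdev.h>

static int configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = { .rss_key = NULL, .rss_hf = RTE_ETH_RSS_IP },
		},
	};

	/* the PMD spreads IP traffic across the nb_rxq Rx queues */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}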
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..96c8a5828364 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_rxconf rx_conf;
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
@@ -199,7 +199,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
@@ -224,7 +224,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
local_port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = rte_eth_dev_info_get(i, &dev_info);
if (ret != 0) {
@@ -234,9 +234,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
}
/* Enable mbuf fast free if PMD has the capability. */
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
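The capability-masking pattern used here generalizes; a hedged sketch (the function name is mine, not DPDK's):

#include <rte_ethdev.h>

static void enable_fast_free_if_supported(uint16_t port_id,
					  struct rte_eth_conf *port_conf)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;
	/* request the offload only when the PMD advertises it */
	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		port_conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
}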
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
#define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
/* Configuration */
#define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -178,7 +178,7 @@ app_ports_check_link(void)
RTE_LOG(INFO, USER1, "Port %u %s\n",
port,
link_status_text);
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
all_ports_up = 0;
}
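For completeness, the matching link check in isolation (a sketch; rte_eth_link_get_nowait() returns an error code in current DPDK):

#include <string.h>
#include <rte_ethdev.h>

static int port_link_is_up(uint16_t port_id)
{
	struct rte_eth_link link;

	memset(&link, 0, sizeof(link));
	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return 0;
	return link.link_status != RTE_ETH_LINK_DOWN;
}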
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 82253bc75110..9ff9847dffa0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1490,51 +1490,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
int duplex;
if (!strcmp(duplexstr, "half")) {
- duplex = ETH_LINK_HALF_DUPLEX;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
} else if (!strcmp(duplexstr, "full")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else if (!strcmp(duplexstr, "auto")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
fprintf(stderr, "Unknown duplex parameter\n");
return -1;
}
if (!strcmp(speedstr, "10")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
} else if (!strcmp(speedstr, "100")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
} else {
- if (duplex != ETH_LINK_FULL_DUPLEX) {
+ if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
fprintf(stderr, "Invalid speed/duplex parameters\n");
return -1;
}
if (!strcmp(speedstr, "1000")) {
- *speed = ETH_LINK_SPEED_1G;
+ *speed = RTE_ETH_LINK_SPEED_1G;
} else if (!strcmp(speedstr, "10000")) {
- *speed = ETH_LINK_SPEED_10G;
+ *speed = RTE_ETH_LINK_SPEED_10G;
} else if (!strcmp(speedstr, "25000")) {
- *speed = ETH_LINK_SPEED_25G;
+ *speed = RTE_ETH_LINK_SPEED_25G;
} else if (!strcmp(speedstr, "40000")) {
- *speed = ETH_LINK_SPEED_40G;
+ *speed = RTE_ETH_LINK_SPEED_40G;
} else if (!strcmp(speedstr, "50000")) {
- *speed = ETH_LINK_SPEED_50G;
+ *speed = RTE_ETH_LINK_SPEED_50G;
} else if (!strcmp(speedstr, "100000")) {
- *speed = ETH_LINK_SPEED_100G;
+ *speed = RTE_ETH_LINK_SPEED_100G;
} else if (!strcmp(speedstr, "200000")) {
- *speed = ETH_LINK_SPEED_200G;
+ *speed = RTE_ETH_LINK_SPEED_200G;
} else if (!strcmp(speedstr, "auto")) {
- *speed = ETH_LINK_SPEED_AUTONEG;
+ *speed = RTE_ETH_LINK_SPEED_AUTONEG;
} else {
fprintf(stderr, "Unknown speed parameter\n");
return -1;
}
}
- if (*speed != ETH_LINK_SPEED_AUTONEG)
- *speed |= ETH_LINK_SPEED_FIXED;
+ if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+ *speed |= RTE_ETH_LINK_SPEED_FIXED;
return 0;
}
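To summarize what this parser builds: link_speeds is a bitmap, and any fixed setting must carry RTE_ETH_LINK_SPEED_FIXED. A minimal sketch requesting fixed 10G (half-duplex bits exist only for 10M/100M, as above):

#include <rte_ethdev.h>

struct rte_eth_conf conf = {
	/* without the _FIXED bit this would mean: autonegotiate
	 * among the listed speeds */
	.link_speeds = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_FIXED,
};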
@@ -2185,33 +2185,33 @@ cmd_config_rss_parsed(void *parsed_result,
int ret;
if (!strcmp(res->value, "all"))
- rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
- ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
- ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
- ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
- ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+ RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+ RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+ RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "eth"))
- rss_conf.rss_hf = ETH_RSS_ETH;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH;
else if (!strcmp(res->value, "vlan"))
- rss_conf.rss_hf = ETH_RSS_VLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
else if (!strcmp(res->value, "ip"))
- rss_conf.rss_hf = ETH_RSS_IP;
+ rss_conf.rss_hf = RTE_ETH_RSS_IP;
else if (!strcmp(res->value, "udp"))
- rss_conf.rss_hf = ETH_RSS_UDP;
+ rss_conf.rss_hf = RTE_ETH_RSS_UDP;
else if (!strcmp(res->value, "tcp"))
- rss_conf.rss_hf = ETH_RSS_TCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_TCP;
else if (!strcmp(res->value, "sctp"))
- rss_conf.rss_hf = ETH_RSS_SCTP;
+ rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
else if (!strcmp(res->value, "ether"))
- rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
else if (!strcmp(res->value, "port"))
- rss_conf.rss_hf = ETH_RSS_PORT;
+ rss_conf.rss_hf = RTE_ETH_RSS_PORT;
else if (!strcmp(res->value, "vxlan"))
- rss_conf.rss_hf = ETH_RSS_VXLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
else if (!strcmp(res->value, "geneve"))
- rss_conf.rss_hf = ETH_RSS_GENEVE;
+ rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
else if (!strcmp(res->value, "nvgre"))
- rss_conf.rss_hf = ETH_RSS_NVGRE;
+ rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
else if (!strcmp(res->value, "l3-pre32"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
else if (!strcmp(res->value, "l3-pre40"))
@@ -2225,44 +2225,44 @@ cmd_config_rss_parsed(void *parsed_result,
else if (!strcmp(res->value, "l3-pre96"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
else if (!strcmp(res->value, "l3-src-only"))
- rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
else if (!strcmp(res->value, "l3-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
else if (!strcmp(res->value, "l4-src-only"))
- rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
else if (!strcmp(res->value, "l4-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
else if (!strcmp(res->value, "l2-src-only"))
- rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
else if (!strcmp(res->value, "l2-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
else if (!strcmp(res->value, "l2tpv3"))
- rss_conf.rss_hf = ETH_RSS_L2TPV3;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
else if (!strcmp(res->value, "esp"))
- rss_conf.rss_hf = ETH_RSS_ESP;
+ rss_conf.rss_hf = RTE_ETH_RSS_ESP;
else if (!strcmp(res->value, "ah"))
- rss_conf.rss_hf = ETH_RSS_AH;
+ rss_conf.rss_hf = RTE_ETH_RSS_AH;
else if (!strcmp(res->value, "pfcp"))
- rss_conf.rss_hf = ETH_RSS_PFCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
else if (!strcmp(res->value, "pppoe"))
- rss_conf.rss_hf = ETH_RSS_PPPOE;
+ rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
else if (!strcmp(res->value, "gtpu"))
- rss_conf.rss_hf = ETH_RSS_GTPU;
+ rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
else if (!strcmp(res->value, "ecpri"))
- rss_conf.rss_hf = ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "mpls"))
- rss_conf.rss_hf = ETH_RSS_MPLS;
+ rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
else if (!strcmp(res->value, "none"))
rss_conf.rss_hf = 0;
else if (!strcmp(res->value, "level-default")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
} else if (!strcmp(res->value, "level-outer")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
} else if (!strcmp(res->value, "level-inner")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
} else if (!strcmp(res->value, "default"))
use_default = 1;
else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2999,8 +2999,8 @@ parse_reta_config(const char *str,
return -1;
}
- idx = hash_index / RTE_RETA_GROUP_SIZE;
- shift = hash_index % RTE_RETA_GROUP_SIZE;
+ idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+ shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[idx].mask |= (1ULL << shift);
reta_conf[idx].reta[shift] = nb_queue;
}
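The idx/shift arithmetic is worth spelling out: each rte_eth_rss_reta_entry64 covers RTE_ETH_RETA_GROUP_SIZE (64) entries, so hash index 70 lands in group 1, bit 6. A sketch for a 128-entry table (the queue number is arbitrary):

#include <string.h>
#include <rte_ethdev.h>

static int reta_steer_example(uint16_t port_id)
{
	struct rte_eth_rss_reta_entry64 reta_conf[128 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t hash_index = 70;
	uint16_t idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;	/* 1 */
	uint16_t shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;	/* 6 */

	memset(reta_conf, 0, sizeof(reta_conf));
	reta_conf[idx].mask = 1ULL << shift;	/* only touch this entry */
	reta_conf[idx].reta[shift] = 3;		/* steer to queue 3 */
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
}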
@@ -3029,10 +3029,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
} else
printf("The reta size of port %d is %u\n",
res->port_id, dev_info.reta_size);
- if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+ if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
fprintf(stderr,
"Currently do not support more than %u entries of redirection table\n",
- ETH_RSS_RETA_SIZE_512);
+ RTE_ETH_RSS_RETA_SIZE_512);
return;
}
@@ -3103,8 +3103,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
char *end;
char *str_fld[8];
uint16_t i;
- uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
int ret;
p = strchr(p0, '(');
@@ -3149,7 +3149,7 @@ cmd_showport_reta_parsed(void *parsed_result,
if (ret != 0)
return;
- max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+ max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
if (res->size == 0 || res->size > max_reta_size) {
fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
res->size, max_reta_size);
@@ -3289,7 +3289,7 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
- if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+ if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
fprintf(stderr,
"The invalid number of traffic class, only 4 or 8 allowed.\n");
return;
@@ -4293,9 +4293,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
enum rte_vlan_type vlan_type;
if (!strcmp(res->vlan_type, "inner"))
- vlan_type = ETH_VLAN_TYPE_INNER;
+ vlan_type = RTE_ETH_VLAN_TYPE_INNER;
else if (!strcmp(res->vlan_type, "outer"))
- vlan_type = ETH_VLAN_TYPE_OUTER;
+ vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
else {
fprintf(stderr, "Unknown vlan type\n");
return;
@@ -4632,55 +4632,55 @@ csum_show(int port_id)
printf("Parse tunnel is %s\n",
(ports[port_id].parse_tunnel) ? "on" : "off");
printf("IP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
printf("UDP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
printf("TCP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
printf("SCTP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
printf("Outer-Ip checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
printf("Outer-Udp checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
/* display warnings if configuration is not supported by the NIC */
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
- if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware UDP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware TCP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
== 0) {
fprintf(stderr,
"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4730,8 +4730,8 @@ cmd_csum_parsed(void *parsed_result,
if (!strcmp(res->proto, "ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_IPV4_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
} else {
fprintf(stderr,
"IP checksum offload is not supported by port %u\n",
@@ -4739,8 +4739,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_UDP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
} else {
fprintf(stderr,
"UDP checksum offload is not supported by port %u\n",
@@ -4748,8 +4748,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "tcp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_TCP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
} else {
fprintf(stderr,
"TCP checksum offload is not supported by port %u\n",
@@ -4757,8 +4757,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "sctp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SCTP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
} else {
fprintf(stderr,
"SCTP checksum offload is not supported by port %u\n",
@@ -4766,9 +4766,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
} else {
fprintf(stderr,
"Outer IP checksum offload is not supported by port %u\n",
@@ -4776,9 +4776,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
} else {
fprintf(stderr,
"Outer UDP checksum offload is not supported by port %u\n",
@@ -4933,7 +4933,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr, "Error: TSO is not supported by port %d\n",
res->port_id);
return;
@@ -4941,11 +4941,11 @@ cmd_tso_set_parsed(void *parsed_result,
if (ports[res->port_id].tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_TCP_TSO;
+ ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO for non-tunneled packets is disabled\n");
} else {
ports[res->port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO segment size for non-tunneled packets is %d\n",
ports[res->port_id].tso_segsz);
}
@@ -4957,7 +4957,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr,
"Warning: TSO enabled but not supported by port %d\n",
res->port_id);
@@ -5028,27 +5028,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
return dev_info;
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
fprintf(stderr,
"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
fprintf(stderr,
"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
fprintf(stderr,
"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
fprintf(stderr,
"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
fprintf(stderr,
"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
fprintf(stderr,
"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
@@ -5076,20 +5076,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
dev_info = check_tunnel_tso_nic_support(res->port_id);
if (ports[res->port_id].tunnel_tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ ~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
printf("TSO for tunneled packets is disabled\n");
} else {
- uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
ports[res->port_id].dev_conf.txmode.offloads |=
(tso_offloads & dev_info.tx_offload_capa);
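The masking on the last two lines is the key idiom; extracted as a sketch (the helper name is mine):

#include <rte_ethdev.h>

static void request_tunnel_tso(struct rte_eth_conf *port_conf,
			       const struct rte_eth_dev_info *dev_info)
{
	uint64_t tso_offloads = RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
				RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
				RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;

	/* bits the port cannot do are silently dropped, as testpmd does */
	port_conf->txmode.offloads |= tso_offloads & dev_info->tx_offload_capa;
}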
@@ -5112,7 +5112,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
fprintf(stderr,
"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
if (!(ports[res->port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
fprintf(stderr,
"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
}
@@ -7058,9 +7058,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
return;
}
- if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
rx_fc_en = true;
- if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
tx_fc_en = true;
printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7338,12 +7338,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
/* Partial command line, retrieve current configuration */
@@ -7356,11 +7356,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
return;
}
- if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
rx_fc_en = 1;
- if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
tx_fc_en = 1;
}
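And the write path, for symmetry (a sketch; real code should also pick water marks and pause time):

#include <string.h>
#include <rte_ethdev.h>

static int enable_full_flow_ctrl(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;

	/* start from the current settings, then flip only the mode */
	memset(&fc_conf, 0, sizeof(fc_conf));
	if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
		return -1;
	fc_conf.mode = RTE_ETH_FC_FULL;		/* rx on + tx on */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}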
@@ -7428,12 +7428,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -8950,13 +8950,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
if (!strcmp(res->what,"rxmode")) {
if (!strcmp(res->mode, "AUPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
else if (!strcmp(res->mode, "ROPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
else if (!strcmp(res->mode, "BAM"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
else if (!strncmp(res->mode, "MPE",3))
- vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
}
RTE_SET_USED(is_on);
@@ -9356,7 +9356,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
int ret;
tunnel_udp.udp_port = res->udp_port;
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
if (!strcmp(res->what, "add"))
ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9422,13 +9422,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
tunnel_udp.udp_port = res->udp_port;
if (!strcmp(res->tunnel_type, "vxlan")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
} else if (!strcmp(res->tunnel_type, "geneve")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
} else if (!strcmp(res->tunnel_type, "ecpri")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
} else {
fprintf(stderr, "Invalid tunnel type\n");
return;
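The underlying call, shown standalone with the renamed tunnel type (a sketch; 4789 is the IANA VXLAN port):

#include <rte_ethdev.h>

static int add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel_udp = {
		.udp_port = 4789,
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};
	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel_udp);
}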
@@ -9543,20 +9543,20 @@ cmd_set_mirror_mask_parsed(void *parsed_result,
memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
- unsigned int vlan_list[ETH_MIRROR_MAX_VLANS];
+ unsigned int vlan_list[RTE_ETH_MIRROR_MAX_VLANS];
mr_conf.dst_pool = res->dstpool_id;
if (!strcmp(res->what, "pool-mirror-up")) {
mr_conf.pool_mask = strtoull(res->value, NULL, 16);
- mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_UP;
+ mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_UP;
} else if (!strcmp(res->what, "pool-mirror-down")) {
mr_conf.pool_mask = strtoull(res->value, NULL, 16);
- mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_DOWN;
+ mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN;
} else if (!strcmp(res->what, "vlan-mirror")) {
- mr_conf.rule_type = ETH_MIRROR_VLAN;
+ mr_conf.rule_type = RTE_ETH_MIRROR_VLAN;
nb_item = parse_item_list(res->value, "vlan",
- ETH_MIRROR_MAX_VLANS, vlan_list, 1);
+ RTE_ETH_MIRROR_MAX_VLANS, vlan_list, 1);
if (nb_item <= 0)
return;
@@ -9656,9 +9656,9 @@ cmd_set_mirror_link_parsed(void *parsed_result,
memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
if (!strcmp(res->what, "uplink-mirror"))
- mr_conf.rule_type = ETH_MIRROR_UPLINK_PORT;
+ mr_conf.rule_type = RTE_ETH_MIRROR_UPLINK_PORT;
else
- mr_conf.rule_type = ETH_MIRROR_DOWNLINK_PORT;
+ mr_conf.rule_type = RTE_ETH_MIRROR_DOWNLINK_PORT;
mr_conf.dst_pool = res->dstpool_id;
@@ -11823,7 +11823,7 @@ cmd_set_macsec_offload_on_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
#endif
@@ -11834,7 +11834,7 @@ cmd_set_macsec_offload_on_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MACSEC_INSERT;
+ RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
@@ -11920,7 +11920,7 @@ cmd_set_macsec_offload_off_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_disable(port_id);
#endif
@@ -11928,7 +11928,7 @@ cmd_set_macsec_offload_off_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MACSEC_INSERT;
+ ~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31d8ba1b913c..e9520e045aa0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,60 +86,60 @@ static const struct {
};
const struct rss_type_info rss_type_table[] = {
- { "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
- ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
- ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
- ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+ { "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+ RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+ RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
{ "none", 0 },
- { "eth", ETH_RSS_ETH },
- { "l2-src-only", ETH_RSS_L2_SRC_ONLY },
- { "l2-dst-only", ETH_RSS_L2_DST_ONLY },
- { "vlan", ETH_RSS_VLAN },
- { "s-vlan", ETH_RSS_S_VLAN },
- { "c-vlan", ETH_RSS_C_VLAN },
- { "ipv4", ETH_RSS_IPV4 },
- { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
- { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
- { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
- { "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
- { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
- { "ipv6", ETH_RSS_IPV6 },
- { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
- { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
- { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
- { "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
- { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
- { "l2-payload", ETH_RSS_L2_PAYLOAD },
- { "ipv6-ex", ETH_RSS_IPV6_EX },
- { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
- { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
- { "port", ETH_RSS_PORT },
- { "vxlan", ETH_RSS_VXLAN },
- { "geneve", ETH_RSS_GENEVE },
- { "nvgre", ETH_RSS_NVGRE },
- { "ip", ETH_RSS_IP },
- { "udp", ETH_RSS_UDP },
- { "tcp", ETH_RSS_TCP },
- { "sctp", ETH_RSS_SCTP },
- { "tunnel", ETH_RSS_TUNNEL },
+ { "eth", RTE_ETH_RSS_ETH },
+ { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+ { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+ { "vlan", RTE_ETH_RSS_VLAN },
+ { "s-vlan", RTE_ETH_RSS_S_VLAN },
+ { "c-vlan", RTE_ETH_RSS_C_VLAN },
+ { "ipv4", RTE_ETH_RSS_IPV4 },
+ { "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+ { "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+ { "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+ { "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+ { "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+ { "ipv6", RTE_ETH_RSS_IPV6 },
+ { "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+ { "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+ { "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+ { "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+ { "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+ { "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+ { "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+ { "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+ { "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+ { "port", RTE_ETH_RSS_PORT },
+ { "vxlan", RTE_ETH_RSS_VXLAN },
+ { "geneve", RTE_ETH_RSS_GENEVE },
+ { "nvgre", RTE_ETH_RSS_NVGRE },
+ { "ip", RTE_ETH_RSS_IP },
+ { "udp", RTE_ETH_RSS_UDP },
+ { "tcp", RTE_ETH_RSS_TCP },
+ { "sctp", RTE_ETH_RSS_SCTP },
+ { "tunnel", RTE_ETH_RSS_TUNNEL },
{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
- { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
- { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
- { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
- { "l4-dst-only", ETH_RSS_L4_DST_ONLY },
- { "esp", ETH_RSS_ESP },
- { "ah", ETH_RSS_AH },
- { "l2tpv3", ETH_RSS_L2TPV3 },
- { "pfcp", ETH_RSS_PFCP },
- { "pppoe", ETH_RSS_PPPOE },
- { "gtpu", ETH_RSS_GTPU },
- { "ecpri", ETH_RSS_ECPRI },
- { "mpls", ETH_RSS_MPLS },
+ { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+ { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+ { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+ { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+ { "esp", RTE_ETH_RSS_ESP },
+ { "ah", RTE_ETH_RSS_AH },
+ { "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+ { "pfcp", RTE_ETH_RSS_PFCP },
+ { "pppoe", RTE_ETH_RSS_PPPOE },
+ { "gtpu", RTE_ETH_RSS_GTPU },
+ { "ecpri", RTE_ETH_RSS_ECPRI },
+ { "mpls", RTE_ETH_RSS_MPLS },
{ NULL, 0 },
};
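A lookup over this table is a simple linear scan, terminated by the { NULL, 0 } sentinel; sketched here assuming the str/rss_type field names from testpmd.h:

#include <stdint.h>
#include <string.h>

static uint64_t rss_name_to_hf(const char *name)
{
	for (int i = 0; rss_type_table[i].str != NULL; i++)
		if (strcmp(rss_type_table[i].str, name) == 0)
			return rss_type_table[i].rss_type;
	return 0;	/* unknown name */
}

so rss_name_to_hf("ipv4-tcp") yields RTE_ETH_RSS_NONFRAG_IPV4_TCP.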
@@ -474,39 +474,39 @@ static void
device_infos_display_speeds(uint32_t speed_capa)
{
printf("\n\tDevice speed capability:");
- if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+ if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
printf(" Autonegotiate (all speeds)");
- if (speed_capa & ETH_LINK_SPEED_FIXED)
+ if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
printf(" Disable autonegotiate (fixed speed) ");
- if (speed_capa & ETH_LINK_SPEED_10M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
printf(" 10 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_10M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M)
printf(" 10 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
printf(" 100 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M)
printf(" 100 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_1G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_1G)
printf(" 1 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_2_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
printf(" 2.5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_5G)
printf(" 5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_10G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10G)
printf(" 10 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_20G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_20G)
printf(" 20 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_25G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_25G)
printf(" 25 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_40G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_40G)
printf(" 40 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_50G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_50G)
printf(" 50 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_56G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_56G)
printf(" 56 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_100G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100G)
printf(" 100 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_200G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_200G)
printf(" 200 Gbps ");
}
@@ -636,9 +636,9 @@ port_infos_display(portid_t port_id)
printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
- printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex"));
- printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+ printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
("On") : ("Off"));
if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -656,22 +656,22 @@ port_infos_display(portid_t port_id)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (vlan_offload >= 0){
printf("VLAN offload: \n");
- if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
printf(" strip on, ");
else
printf(" strip off, ");
- if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
printf("filter on, ");
else
printf("filter off, ");
- if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
printf("extend on, ");
else
printf("extend off, ");
- if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
printf("qinq strip on\n");
else
printf("qinq strip off\n");
@@ -1166,7 +1166,7 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag)
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ else if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
/*
* Ether overhead in driver is equal to the difference of
* max_rx_pktlen and max_mtu in rte_eth_dev_info when the
@@ -1175,12 +1175,12 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
if (mtu > RTE_ETHER_MTU) {
rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rte_port->dev_conf.rxmode.max_rx_pkt_len =
mtu + eth_overhead;
} else
rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
}
}
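The overhead arithmetic deserves a note: the driver reports both max_rx_pktlen and max_mtu, and their difference is the L2 overhead to add on top of the MTU. As a sketch:

#include <rte_ethdev.h>

static void set_jumbo_by_mtu(struct rte_eth_conf *conf,
			     const struct rte_eth_dev_info *dev_info,
			     uint16_t mtu)
{
	/* L2 header + CRC (+ VLAN tags), as reported by the driver */
	uint32_t eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;

	if (mtu > RTE_ETHER_MTU) {
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
		conf->rxmode.max_rx_pkt_len = mtu + eth_overhead;
	} else {
		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
	}
}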
@@ -2767,8 +2767,8 @@ port_rss_reta_info(portid_t port_id,
}
for (i = 0; i < nb_entries; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3118,7 +3118,7 @@ dcb_fwd_config_setup(void)
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
fwd_lcores[lc_id]->stream_idx = sm_id;
- for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+ for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
@@ -4181,11 +4181,11 @@ vlan_extend_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
} else {
- vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4211,11 +4211,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
- vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4256,11 +4256,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
- vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4286,11 +4286,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
} else {
- vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4360,7 +4360,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (ports[port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_QINQ_INSERT) {
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
fprintf(stderr, "Error, as QinQ has been enabled.\n");
return;
}
@@ -4369,7 +4369,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
fprintf(stderr,
"Error: vlan insert is not supported by port %d\n",
port_id);
@@ -4377,7 +4377,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
ports[port_id].tx_vlan_id = vlan_id;
}
@@ -4396,7 +4396,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
fprintf(stderr,
"Error: qinq insert not supported by port %d\n",
port_id);
@@ -4404,8 +4404,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = vlan_id;
ports[port_id].tx_vlan_id_outer = vlan_id_outer;
}
@@ -4414,8 +4414,8 @@ void
tx_vlan_reset(portid_t port_id)
{
ports[port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = 0;
ports[port_id].tx_vlan_id_outer = 0;
}
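Per packet, the port-level VLAN_INSERT offload pairs with an mbuf tag and flag; a hedged sketch using the mbuf flag name of this era:

#include <rte_mbuf.h>

static void request_vlan_insert(struct rte_mbuf *m, uint16_t vlan_id)
{
	m->vlan_tci = vlan_id;		/* tag the hardware will insert */
	m->ol_flags |= PKT_TX_VLAN;
}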
@@ -4821,7 +4821,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
ret = eth_link_get_nowait_print_err(port_id, &link);
if (ret < 0)
return 1;
- if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+ if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
rate > link.link_speed) {
fprintf(stderr,
"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533b6..454a2d41c366 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
/* do not recalculate udp cksum if it was 0 */
if (udp_hdr->dgram_cksum != 0) {
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
ol_flags |= PKT_TX_UDP_CKSUM;
} else {
udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
if (tso_segsz)
ol_flags |= PKT_TX_TCP_SEG;
- else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
} else {
tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
((char *)l3_hdr + info->l3_len);
/* sctp payload must be a multiple of 4 to be
* offloaded */
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
((ipv4_hdr->total_length & 0x3) == 0)) {
ol_flags |= PKT_TX_SCTP_CKSUM;
} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ipv4_hdr->hdr_checksum = 0;
ol_flags |= PKT_TX_OUTER_IPV4;
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
ol_flags |= PKT_TX_OUTER_IP_CKSUM;
else
ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ol_flags |= PKT_TX_TCP_SEG;
/* Skip SW outer UDP checksum generation if HW supports it */
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
udp_hdr->dgram_cksum
= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (info.is_tunnel == 1) {
if (info.tunnel_tso_segsz ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
m->outer_l2_len = info.outer_l2_len;
m->outer_l3_len = info.outer_l3_len;
m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
rte_be_to_cpu_16(info.outer_ethertype),
info.outer_l3_len);
/* dump tx packet info */
- if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+ if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
info.tso_segsz != 0)
printf("tx: m->l2_len=%d m->l3_len=%d "
"m->l4_len=%d\n",
m->l2_len, m->l3_len, m->l4_len);
if (info.is_tunnel == 1) {
if ((tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
(tx_ol_flags & PKT_TX_OUTER_IPV6))
printf("tx: m->outer_l2_len=%d "
"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 9348618d0f8d..7d658d002cb6 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags |= PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d48..1d878ba0a694 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
txp = &ports[fs->tx_port];
tx_offloads = txp->dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (i = 0; i < nb_rx; i++) {
if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
{
uint64_t ol_flags = 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
PKT_TX_VLAN : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
PKT_TX_QINQ : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
PKT_TX_MACSEC : 0;
return ol_flags;
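Typical call site, mirroring macfwd and flowgen above (uses testpmd globals, so only a sketch):

	uint64_t ol_flags = ol_flags_init(ports[fs->tx_port].dev_conf.txmode.offloads);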
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 7c13210f04aa..1d0187723532 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -475,29 +475,29 @@ parse_event_printing_config(const char *optarg, int enable)
static int
parse_link_speed(int n)
{
- uint32_t speed = ETH_LINK_SPEED_FIXED;
+ uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
switch (n) {
case 1000:
- speed |= ETH_LINK_SPEED_1G;
+ speed |= RTE_ETH_LINK_SPEED_1G;
break;
case 10000:
- speed |= ETH_LINK_SPEED_10G;
+ speed |= RTE_ETH_LINK_SPEED_10G;
break;
case 25000:
- speed |= ETH_LINK_SPEED_25G;
+ speed |= RTE_ETH_LINK_SPEED_25G;
break;
case 40000:
- speed |= ETH_LINK_SPEED_40G;
+ speed |= RTE_ETH_LINK_SPEED_40G;
break;
case 50000:
- speed |= ETH_LINK_SPEED_50G;
+ speed |= RTE_ETH_LINK_SPEED_50G;
break;
case 100000:
- speed |= ETH_LINK_SPEED_100G;
+ speed |= RTE_ETH_LINK_SPEED_100G;
break;
case 200000:
- speed |= ETH_LINK_SPEED_200G;
+ speed |= RTE_ETH_LINK_SPEED_200G;
break;
case 100:
case 10:
@@ -912,13 +912,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
if (!strcmp(optarg, "64K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_64K;
+ RTE_ETH_FDIR_PBALLOC_64K;
else if (!strcmp(optarg, "128K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_128K;
+ RTE_ETH_FDIR_PBALLOC_128K;
else if (!strcmp(optarg, "256K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_256K;
+ RTE_ETH_FDIR_PBALLOC_256K;
else
rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
" must be: 64K or 128K or 256K\n",
@@ -960,34 +960,34 @@ launch_args_parse(int argc, char** argv)
}
#endif
if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
- rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
- rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
- rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
if (!strcmp(lgopts[opt_idx].name,
"enable-rx-timestamp"))
- rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-filter"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-extend"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-qinq-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
rx_drop_en = 1;
@@ -1009,13 +1009,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
set_pkt_forwarding_mode(optarg);
if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
- rss_hf = ETH_RSS_IP;
+ rss_hf = RTE_ETH_RSS_IP;
if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
- rss_hf = ETH_RSS_UDP;
+ rss_hf = RTE_ETH_RSS_UDP;
if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
- rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
- rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
if (!strcmp(lgopts[opt_idx].name, "rxq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1386,12 +1386,12 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
char *end = NULL;
n = strtoul(optarg, &end, 16);
- if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+ if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
else
rte_exit(EXIT_FAILURE,
"rx-mq-mode must be >= 0 and <= %d\n",
- ETH_MQ_RX_VMDQ_DCB_RSS);
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
}
if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6cbe9ba3c893..30bf897d6da8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -337,7 +337,7 @@ uint64_t noisy_lkup_num_reads_writes;
/*
* Receive Side Scaling (RSS) configuration.
*/
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
/*
* Port topology configuration
@@ -454,12 +454,12 @@ struct rte_eth_rxmode rx_mode = {
};
struct rte_eth_txmode tx_mode = {
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
};
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
.mode = RTE_FDIR_MODE_NONE,
- .pballoc = RTE_FDIR_PBALLOC_64K,
+ .pballoc = RTE_ETH_FDIR_PBALLOC_64K,
.status = RTE_FDIR_REPORT_STATUS,
.mask = {
.vlan_tci_mask = 0xFFEF,
@@ -513,7 +513,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
/*
* hexadecimal bitmask of RX mq mode can be enabled.
*/
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
/*
* Used to set forced link speed
@@ -1437,9 +1437,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
"Updating jumbo frame offload failed for port %u\n",
pid);
- if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Apply Rx offloads configuration */
for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1566,8 +1566,8 @@ init_config(void)
init_port_config();
- gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
/*
* Records which Mbuf pool to use by each logical core, if needed.
*/
@@ -3154,7 +3154,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3414,17 +3414,17 @@ update_jumbo_frame_offload(portid_t portid)
port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if ((port->dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
port->dev_conf.rxmode.max_rx_pkt_len,
portid);
return -1;
}
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
on = true;
}
@@ -3436,16 +3436,16 @@ update_jumbo_frame_offload(portid_t portid)
/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port->rx_conf[qid].offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port->rx_conf[qid].offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
}
}
/* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
* if unset do it here
*/
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
ret = rte_eth_dev_set_mtu(portid,
port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
if (ret)
@@ -3486,9 +3486,9 @@ init_port_config(void)
if( port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0)
port->dev_conf.rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_RSS);
else
- port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
}
rxtx_port_config(port);
@@ -3575,9 +3575,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->enable_default_pool = 0;
vmdq_rx_conf->default_pool = 0;
vmdq_rx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_tx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3585,7 +3585,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->pool_map[i].pools =
1 << (i % vmdq_rx_conf->nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
}
@@ -3593,8 +3593,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
/* set DCB mode of RX and TX of multiple queues */
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
- eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ (rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
} else {
struct rte_eth_dcb_rx_conf *rx_conf =
&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3610,23 +3610,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
rx_conf->nb_tcs = num_tcs;
tx_conf->nb_tcs = num_tcs;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
rx_conf->dcb_tc[i] = i % num_tcs;
tx_conf->dcb_tc[i] = i % num_tcs;
}
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
eth_conf->rx_adv_conf.rss_conf = rss_conf;
- eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
}
if (pfc_en)
eth_conf->dcb_capability_en =
- ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+ RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
else
- eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+ eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
return 0;
}
@@ -3653,7 +3653,7 @@ init_port_dcb_config(portid_t pid,
retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
if (retval < 0)
return retval;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
/* re-configure the device . */
retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3703,7 +3703,7 @@ init_port_dcb_config(portid_t pid,
rxtx_port_config(rte_port);
/* VLAN filter */
- rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
for (i = 0; i < RTE_DIM(vlan_tags); i++)
rx_vft_set(pid, vlan_tags[i], 1);
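
For reference, the capability check used throughout this hunk generalizes to any of the renamed flags. A minimal sketch of the same jumbo-frame gate outside testpmd (hypothetical helper; frame_size and the error handling are assumptions):

#include <errno.h>
#include <rte_ethdev.h>

/* Enable jumbo frames only when the port advertises the renamed
 * capability bit; otherwise leave the configuration untouched. */
static int
enable_jumbo(uint16_t port_id, uint32_t frame_size, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;
	if ((dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
		return -ENOTSUP;
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
	conf->rxmode.max_rx_pkt_len = frame_size;
	return 0;
}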
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 16a3598e48c5..e4ad8a6a7cff 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -446,7 +446,7 @@ extern lcoreid_t bitrate_lcore_id;
extern uint8_t bitrate_enabled;
#endif
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
/*
* Configuration of packet segments used to scatter received packets
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d340..5409d7a0deb0 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -352,11 +352,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
tx_offloads = txp->dev_conf.txmode.offloads;
vlan_tci = txp->tx_vlan_id;
vlan_tci_outer = txp->tx_vlan_id_outer;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
/*
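
The txonly change above is the canonical mapping from per-port Tx offloads to per-packet mbuf flags; a self-contained sketch of that mapping (the PKT_TX_* mbuf flags are the names current in this release):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Map the port's configured Tx offloads to per-packet ol_flags,
 * mirroring the txonly.c logic above. */
static uint64_t
tx_ol_flags_from_offloads(uint64_t tx_offloads)
{
	uint64_t ol_flags = 0;

	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
		ol_flags |= PKT_TX_VLAN_PKT;
	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
		ol_flags |= PKT_TX_QINQ_PKT;
	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
		ol_flags |= PKT_TX_MACSEC;
	return ol_flags;
}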
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
text, strlen(text), "Invalid default link status string");
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_FIXED;
- link_status.link_speed = ETH_SPEED_NUM_10M,
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #2: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_NONE;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
"string with HDX");
/* test max str len */
- link_status.link_speed = ETH_SPEED_NUM_200G;
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_AUTONEG;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #4:len = %d, %s\n", ret, text);
RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
int ret = 0;
struct rte_eth_link link_status = {
.link_speed = 55555,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
const char *value;
uint32_t link_speed;
} speed_str_map[] = {
- { "None", ETH_SPEED_NUM_NONE },
- { "10 Mbps", ETH_SPEED_NUM_10M },
- { "100 Mbps", ETH_SPEED_NUM_100M },
- { "1 Gbps", ETH_SPEED_NUM_1G },
- { "2.5 Gbps", ETH_SPEED_NUM_2_5G },
- { "5 Gbps", ETH_SPEED_NUM_5G },
- { "10 Gbps", ETH_SPEED_NUM_10G },
- { "20 Gbps", ETH_SPEED_NUM_20G },
- { "25 Gbps", ETH_SPEED_NUM_25G },
- { "40 Gbps", ETH_SPEED_NUM_40G },
- { "50 Gbps", ETH_SPEED_NUM_50G },
- { "56 Gbps", ETH_SPEED_NUM_56G },
- { "100 Gbps", ETH_SPEED_NUM_100G },
- { "200 Gbps", ETH_SPEED_NUM_200G },
- { "Unknown", ETH_SPEED_NUM_UNKNOWN },
+ { "None", RTE_ETH_SPEED_NUM_NONE },
+ { "10 Mbps", RTE_ETH_SPEED_NUM_10M },
+ { "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+ { "1 Gbps", RTE_ETH_SPEED_NUM_1G },
+ { "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+ { "5 Gbps", RTE_ETH_SPEED_NUM_5G },
+ { "10 Gbps", RTE_ETH_SPEED_NUM_10G },
+ { "20 Gbps", RTE_ETH_SPEED_NUM_20G },
+ { "25 Gbps", RTE_ETH_SPEED_NUM_25G },
+ { "40 Gbps", RTE_ETH_SPEED_NUM_40G },
+ { "50 Gbps", RTE_ETH_SPEED_NUM_50G },
+ { "56 Gbps", RTE_ETH_SPEED_NUM_56G },
+ { "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+ { "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+ { "Unknown", RTE_ETH_SPEED_NUM_UNKNOWN },
{ "Invalid", 50505 }
};
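
The renamed link constants above are consumed by rte_eth_link_to_str(); a short usage sketch (hypothetical helper):

#include <stdio.h>
#include <rte_ethdev.h>

/* Format a port's link status with the renamed constants. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;
	char text[RTE_ETH_LINK_MAX_STR_LEN];

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	if (link.link_status == RTE_ETH_LINK_DOWN) {
		printf("Port %u: link down\n", port_id);
		return;
	}
	if (rte_eth_link_to_str(text, sizeof(text), &link) > 0)
		printf("Port %u: %s\n", port_id, text);
}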
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index 9198767b4194..bb7917010d62 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -106,7 +106,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
.intr_conf = {
.rxq = 1,
@@ -121,7 +121,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
};
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
static const struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..23c024aa1b0c 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,12 +134,12 @@ static uint16_t vlan_id = 0x100;
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..1556f14d6921 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,12 +107,12 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..cdf1c4fd259d 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
struct rte_eth_rss_conf rss_conf;
uint8_t rss_key[40];
- struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
uint8_t is_slave;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
- struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
struct slave_conf slave_ports[SLAVE_COUNT];
struct rte_mempool *mbuf_pool;
@@ -80,29 +80,29 @@ static struct link_bonding_rssconf_unittest_params test_params = {
*/
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IPV6,
+ .rss_hf = RTE_ETH_RSS_IPV6,
},
},
.lpbk_mode = 0,
@@ -209,13 +209,13 @@ bond_slaves(void)
static int
reta_set(uint16_t port_id, uint8_t value, int reta_size)
{
- struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
int i, j;
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
/* select all fields to set */
reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
reta_conf[i].reta[j] = value;
}
@@ -234,8 +234,8 @@ reta_check_synced(struct slave_conf *port)
for (i = 0; i < test_params.bond_dev_info.reta_size;
i++) {
- int index = i / RTE_RETA_GROUP_SIZE;
- int shift = i % RTE_RETA_GROUP_SIZE;
+ int index = i / RTE_ETH_RETA_GROUP_SIZE;
+ int shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (port->reta_conf[index].reta[shift] !=
test_params.bond_reta_conf[index].reta[shift])
@@ -253,7 +253,7 @@ static int
bond_reta_fetch(void) {
unsigned j;
- for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+ for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
j++)
test_params.bond_reta_conf[j].mask = ~0LL;
@@ -270,7 +270,7 @@ static int
slave_reta_fetch(struct slave_conf *port) {
unsigned j;
- for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
port->reta_conf[j].mask = ~0LL;
TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
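
The RETA hunks above only rename RTE_RETA_GROUP_SIZE; the update side of the same API looks like this sketch (hypothetical helper; reta_size is assumed to be at most 512):

#include <string.h>
#include <rte_ethdev.h>

/* Spread the redirection table evenly across nb_queues queues. */
static int
reta_spread(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = ~0ULL;
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}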
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..da7b7ad1f7cc 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,12 +62,12 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 1, /* enable loopback */
};
@@ -156,7 +156,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -823,7 +823,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
/* bulk alloc rx, full-featured tx */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "hybrid")) {
/* bulk alloc rx, vector tx
@@ -832,13 +832,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
*/
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "full")) {
/* full feature rx,tx pair */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed95..6eecfa385537 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
void *pkt = NULL;
struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
rte_pktmbuf_free(pkt);
@@ -178,7 +178,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
int wait_to_complete __rte_unused)
{
if (!bonded_eth_dev->data->dev_started)
- bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -574,9 +574,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
eth_dev->data->nb_rx_queues = (uint16_t)1;
eth_dev->data->nb_tx_queues = (uint16_t)1;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
- eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue configuration.
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue config.
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..13f30e39363e 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,7 +71,7 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
+ * Check ``RTE_ETH_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..77827e750195 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,22 +877,22 @@ processing. This improved performance is derived from a number of optimizations:
* TX: only the following reduced set of transmit offloads is supported in
vector mode::
- DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* RX: only the following reduced set of receive offloads is supported in
vector mode (note that jumbo MTU is allowed only when the MTU setting
- does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
- DEV_RX_OFFLOAD_VLAN_STRIP
- DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
- DEV_RX_OFFLOAD_IPV4_CKSUM
- DEV_RX_OFFLOAD_UDP_CKSUM
- DEV_RX_OFFLOAD_TCP_CKSUM
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
- DEV_RX_OFFLOAD_RSS_HASH
- DEV_RX_OFFLOAD_VLAN_FILTER
+ does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_RSS_HASH
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER
The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
.. code-block:: console
vlan_offload = rte_eth_dev_get_vlan_offload(port);
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
rte_eth_dev_set_vlan_offload(port, vlan_offload);
Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a96e12d15515..7f7d6ae45658 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
Supports getting the speed capabilities that the current device is capable of.
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
* **[related] API**: ``rte_eth_dev_info_get()``.
@@ -101,11 +101,11 @@ Supports Rx interrupts.
Lock-free Tx queue
------------------
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
* **[related] API**: ``rte_eth_tx_burst()``.
@@ -117,8 +117,8 @@ Fast mbuf free
Supports optimization for fast release of mbufs following successful Tx.
Requires that, per queue, all mbufs come from the same mempool and have refcnt = 1.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
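
A sketch of honoring the capability contract described above (hypothetical helper; the per-queue mempool and refcnt guarantee remains the application's responsibility):

#include <rte_ethdev.h>

/* Request fast mbuf free only when the port advertises it. */
static void
maybe_enable_fast_free(struct rte_eth_conf *conf,
		       const struct rte_eth_dev_info *dev_info)
{
	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
}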
.. _nic_features_free_tx_mbuf_on_demand:
@@ -165,7 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_JUMBO_FRAME``.
``dev_conf.rxmode.max_rx_pkt_len``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -178,7 +178,7 @@ Scattered Rx
Supports receiving segmented mbufs.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
* **[implements] datapath**: ``Scattered Rx function``.
* **[implements] rte_eth_dev_data**: ``scattered_rx``.
* **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,12 +206,12 @@ LRO
Supports Large Receive Offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
@@ -222,12 +222,12 @@ TSO
Supports TCP Segmentation Offloading.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
* **[uses] rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
* **[uses] mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
* **[uses] mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
* **[implements] datapath**: ``TSO functionality``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
.. _nic_features_promiscuous_mode:
@@ -288,9 +288,9 @@ RSS hash
Supports RSS hashing on RX.
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
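
A configuration sketch tying the renamed RSS names together (hypothetical helper; supported_hf would come from rte_eth_dev_info.flow_type_rss_offloads):

#include <rte_ethdev.h>

/* Enable RSS with the renamed mq-mode and hash-type macros; a NULL
 * rss_key asks the PMD to use its default key. */
static void
rss_conf_init(struct rte_eth_conf *conf, uint64_t supported_hf)
{
	conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
	conf->rx_adv_conf.rss_conf.rss_key = NULL;
	conf->rx_adv_conf.rss_conf.rss_hf =
		(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP) &
		supported_hf;
}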
@@ -303,7 +303,7 @@ Inner RSS
Supports RX RSS hashing on Inner headers.
* **[uses] rte_flow_action_rss**: ``level``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -340,7 +340,7 @@ VMDq
Supports Virtual Machine Device Queues (VMDq).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -363,7 +363,7 @@ DCB
Supports Data Center Bridging (DCB).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -379,7 +379,7 @@ VLAN filter
Supports filtering of a VLAN Tag identifier.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
* **[implements] eth_dev_ops**: ``vlan_filter_set``.
* **[related] API**: ``rte_eth_dev_vlan_filter()``.
@@ -428,12 +428,12 @@ Supports inline crypto processing defined by rte_security library to perform cry
operations of security protocol while packet is received in NIC. NIC is not aware
of protocol operations. See Security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -449,13 +449,13 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
packet is received at NIC. The NIC is capable of understanding the security
protocol operations. See security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -469,7 +469,7 @@ CRC offload
Supports CRC stripping by hardware.
A PMD is assumed to support CRC stripping by default. A PMD should advertise whether it supports keeping the CRC.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
.. _nic_features_vlan_offload:
@@ -479,13 +479,13 @@ VLAN offload
Supports VLAN offload to hardware.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
@@ -497,14 +497,14 @@ QinQ offload
Supports QinQ (queue in queue) offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
.. _nic_features_fec:
@@ -518,7 +518,7 @@ information to correct the bit errors generated during data packet transmission
improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
* **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides] rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides] rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
* **[related] API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
@@ -529,16 +529,16 @@ L3 checksum offload
Supports L3 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[uses] mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
.. _nic_features_l4_checksum_offload:
@@ -548,8 +548,8 @@ L4 checksum offload
Supports L4 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -557,8 +557,8 @@ Supports L4 checksum offload.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
.. _nic_features_hw_timestamp:
@@ -567,10 +567,10 @@ Timestamp offload
Supports Timestamp.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[related] eth_dev_ops**: ``read_clock``.
.. _nic_features_macsec_offload:
@@ -580,11 +580,11 @@ MACsec offload
Supports MACsec.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
.. _nic_features_inner_l3_checksum:
@@ -594,16 +594,16 @@ Inner L3 checksum
Supports inner packet L3 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
.. _nic_features_inner_l4_checksum:
@@ -613,15 +613,15 @@ Inner L4 checksum
Supports inner packet L4 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
.. _nic_features_packet_type_parsing:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..3dff65d89b6d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
will be checked:
-* ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+* ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
-* ``DEV_RX_OFFLOAD_CHECKSUM``
+* ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
-* ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+* ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
* ``fdir_conf->mode``
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fcea8151bf3c..e60e3b2a761d 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -222,21 +222,21 @@ For example,
* If the max number of VFs (max_vfs) is set in the range of 1 to 32:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
* If the max number of VFs (max_vfs) is in the range of 33 to 64:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
as ``rxq`` is not correct in this case;
- If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+ If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (RTE_ETH_64_POOLS),
and each VF could have 2 Rx queues;
- On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
- or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+ On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+ or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
It is also necessary to configure VF RSS information such as the hash function, RSS key, and RSS key length.
.. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b82e63438285..24fbccc982f5 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -69,13 +69,13 @@ Other features are supported using optional MACRO configuration. They include:
To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
-* DEV_RX_OFFLOAD_VLAN_STRIP
+* RTE_ETH_RX_OFFLOAD_VLAN_STRIP
-* DEV_RX_OFFLOAD_VLAN_EXTEND
+* RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
-* DEV_RX_OFFLOAD_CHECKSUM
+* RTE_ETH_RX_OFFLOAD_CHECKSUM
-* DEV_RX_OFFLOAD_HEADER_SPLIT
+* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
* dev_conf
@@ -143,13 +143,13 @@ l3fwd
~~~~~
When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.
load_balancer
~~~~~~~~~~~~~
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..6facb68b9545 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
- CRC:
- - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+ - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
@@ -607,7 +607,7 @@ Driver options
small-packet traffic.
When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
- user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+ user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
configure large stride size enough to accommodate max_rx_pkt_len as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
be added in next releases
TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
**Known limitation:** TAP supports all of the above hash functions together
and not in partial combinations.
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
- the bit mask of required GSO types. The GSO library uses the same macros as
those that describe a physical device's TX offloading capabilities (i.e.
- ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+ ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
wants to segment TCP/IPv4 packets, it should set gso_types to
- ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
- supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
- ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+ ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
+ supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO``, and
+ ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
allowed.
- a flag, that indicates whether the IPv4 headers of output segments should
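
A sketch of driving the GSO library with the renamed gso_types values (hypothetical helper; pool setup, the MSS, and the mbuf's packet_type and l2/l3/l4 lengths are assumed to be prepared by the caller):

#include <rte_gso.h>

/* Segment a TCP/IPv4 packet in software; gso_types takes the same
 * renamed macros as the device Tx capabilities. */
static int
gso_tcp4(struct rte_mbuf *pkt, struct rte_mempool *direct,
	 struct rte_mempool *indirect,
	 struct rte_mbuf **out, uint16_t nb_out)
{
	struct rte_gso_ctx ctx = {
		.direct_pool = direct,
		.indirect_pool = indirect,
		.flag = 0,
		.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO,
		.gso_size = 1400, /* hypothetical MSS */
	};

	return rte_gso_segment(pkt, &ctx, out, nb_out);
}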
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
set out_ip checksum to 0 in the packet
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
- calculate checksum of out_ip and out_udp::
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
set out_ip checksum to 0 in the packet
set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
- and DEV_TX_OFFLOAD_UDP_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+ and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
- calculate checksum of in_ip::
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
This is similar to case 1), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
This is similar to case 2), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
- DEV_TX_OFFLOAD_TCP_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header without including the IP
payload length using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
- DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
The list of flags and their precise meaning is described in the mbuf API
documentation (rte_mbuf.h). Also refer to the testpmd source code
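
A compact sketch of case 2) above with the renamed offload names (hypothetical helper; the port is assumed to advertise RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and RTE_ETH_TX_OFFLOAD_UDP_CKSUM):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_udp.h>

/* Case 2) above: hardware computes the IP and UDP checksums. */
static void
prep_ipv4_udp_cksum(struct rte_mbuf *m, struct rte_ipv4_hdr *ip,
		    struct rte_udp_hdr *udp)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM;
	ip->hdr_checksum = 0;
	udp->dgram_cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
}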
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
concurrently on the same Tx queue without a SW lock. This PMD feature, found in some NICs, is useful in the following use cases:
* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
enables more scaling as all workers can send the packets.
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
Device Identification, Ownership and Configuration
--------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
Any requested offloading by an application must be within the device capabilities.
Any offloading is disabled by default if it is not set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
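
A sketch of keeping requested offloads within the advertised capabilities, as the paragraph above requires (hypothetical helper):

#include <rte_ethdev.h>

/* Clamp requested offloads to what the device reports; anything
 * outside [rt]x_offload_capa must not reach rte_eth_dev_configure(). */
static void
clamp_offloads(struct rte_eth_conf *conf,
	       const struct rte_eth_dev_info *dev_info)
{
	conf->rxmode.offloads &= dev_info->rx_offload_capa;
	conf->txmode.offloads &= dev_info->tx_offload_capa;
}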
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c05..1bac8f04b96e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1835,23 +1835,23 @@ only matching traffic goes through.
.. table:: RSS
- +---------------+---------------------------------------------+
- | Field | Value |
- +===============+=============================================+
- | ``func`` | RSS hash function to apply |
- +---------------+---------------------------------------------+
- | ``level`` | encapsulation level for ``types`` |
- +---------------+---------------------------------------------+
- | ``types`` | specific RSS hash types (see ``ETH_RSS_*``) |
- +---------------+---------------------------------------------+
- | ``key_len`` | hash key length in bytes |
- +---------------+---------------------------------------------+
- | ``queue_num`` | number of entries in ``queue`` |
- +---------------+---------------------------------------------+
- | ``key`` | hash key |
- +---------------+---------------------------------------------+
- | ``queue`` | queue indices to use |
- +---------------+---------------------------------------------+
+ +---------------+-------------------------------------------------+
+ | Field | Value |
+ +===============+=================================================+
+ | ``func`` | RSS hash function to apply |
+ +---------------+-------------------------------------------------+
+ | ``level`` | encapsulation level for ``types`` |
+ +---------------+-------------------------------------------------+
+ | ``types`` | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+ +---------------+-------------------------------------------------+
+ | ``key_len`` | hash key length in bytes |
+ +---------------+-------------------------------------------------+
+ | ``queue_num`` | number of entries in ``queue`` |
+ +---------------+-------------------------------------------------+
+ | ``key`` | hash key |
+ +---------------+-------------------------------------------------+
+ | ``queue`` | queue indices to use |
+ +---------------+-------------------------------------------------+
Action: ``PF``
^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f72bc8a78fa6..e3bd451917f0 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -560,7 +560,7 @@ created by the application is attached to the security session by the API
For Inline Crypto and Inline protocol offload, device specific defined metadata is
updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
For inline protocol offloaded ingress traffic, the application can register a
pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b0b..20159a1c9a90 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
usage in following public struct hierarchy:
- ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+ ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
Need to identify this kind of usages and fix in 20.11, otherwise this blocks
us extending existing enum/define.
One solution can be using a fixed size array instead of ``.*MAX.*`` value.
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
- Macros will be added for backward compatibility.
- Backward compatibility macros will be removed on v22.11.
- A few old backward compatibility macros from 2013 that does not have
- proper prefix will be removed on v21.11.
-
* ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
will be removed in DPDK 20.11.
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
This will allow an application to enable or disable PMD updates of
``rte_mbuf::hash::fdir``.
This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
@@ -98,7 +92,7 @@ Deprecation Notices
either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
An application may need to configure the device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
+ cases ``RTE_ETH_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
can't be bigger than Rx buffer size.
To cover these cases an application needs to know the device packet overhead to be
able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
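As a sketch of the calculation this notice asks for, assuming the overhead can
be inferred from the existing ``max_rx_pktlen`` and ``max_mtu`` fields of
``rte_eth_dev_info`` (the notice is about exposing it explicitly):

    static uint16_t
    mtu_for_rx_buf_size(const struct rte_eth_dev_info *info,
                        uint16_t rx_buf_size)
    {
        /* Overhead = headers/CRC the device adds beyond the MTU. */
        uint16_t overhead = info->max_rx_pktlen - info->max_mtu;

        return rx_buf_size - overhead;
    }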
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554efaf..daff4de36a76 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -100,6 +100,9 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+ updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
Known Issues
------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
* ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
- (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
allows the user to disable some of the RX HW offload capabilities.
By default all HW RX offloads are enabled.
* ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
- (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
allows the user to disable some of the TX HW offload capabilities.
By default all HW TX offloads are enabled.
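One way to derive a MASK value is to OR the renamed macros and print the
result, so the mask tracks the header definitions instead of hard-coded bits
(the flag choice here is only an example):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    int
    main(void)
    {
        uint64_t rx_mask = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                           RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
                           RTE_ETH_RX_OFFLOAD_TCP_CKSUM;

        /* Paste the printed value after --rxoffload. */
        printf("--rxoffload 0x%" PRIx64 "\n", rx_mask);
        return 0;
    }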
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 6061674239f4..d7f5951d4639 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -526,7 +526,7 @@ The command line options are:
Set the hexadecimal bitmask of the RX multi-queue modes which can be enabled.
The default value is 0x7::
- ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+ RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
* ``--record-core-cycles``
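The ``--rx-mq-mode`` default 0x7 above is simply the OR of the three renamed
flags (RSS is bit 0, DCB bit 1, VMDq bit 2), so e.g. ``--rx-mq-mode 0x1``
restricts testpmd to RSS only. A quick sanity check of that assumption:

    #include <assert.h>
    #include <rte_ethdev.h>

    int
    main(void)
    {
        assert((RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
                RTE_ETH_MQ_RX_VMDQ_FLAG) == 0x7);
        return 0;
    }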
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
struct usdpaa_ioctl_link_status_args_old {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
- /* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+ /* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
int link_autoneg;
};
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
struct usdpaa_ioctl_update_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_update_link_speed {
/* network device node name*/
char if_name[IF_NAME_MAX_LEN];
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
};
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index bab25fd72eee..360bf75d3861 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -153,7 +153,7 @@ enum roc_npc_rss_hash_function {
struct roc_npc_action_rss {
enum roc_npc_rss_hash_function func;
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b73b211fd249..fb5d549e6227 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -91,10 +91,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -265,7 +265,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -295,7 +295,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
internals->tx_queue[i].sockfd = -1;
}
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -316,8 +316,8 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
return 0;
}
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 74ffa4511284..dbf745852716 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
/* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -654,7 +654,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -663,7 +663,7 @@ eth_dev_start(struct rte_eth_dev *dev)
static int
eth_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
/* ARK PMD supports all line rates, how do we indicate that here ?? */
- dev_info->speed_capa = (ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G);
-
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G);
+
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return 0;
}
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..5af1cff3770e 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,21 +154,21 @@ static struct rte_pci_driver rte_atl_pmd = {
.remove = eth_atl_pci_remove,
};
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
- | DEV_RX_OFFLOAD_IPV4_CKSUM \
- | DEV_RX_OFFLOAD_UDP_CKSUM \
- | DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
- | DEV_RX_OFFLOAD_MACSEC_STRIP \
- | DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
- | DEV_TX_OFFLOAD_IPV4_CKSUM \
- | DEV_TX_OFFLOAD_UDP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_TSO \
- | DEV_TX_OFFLOAD_MACSEC_INSERT \
- | DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+ | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_JUMBO_FRAME \
+ | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+ | RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+ | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_TSO \
+ | RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+ | RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define SFP_EEPROM_SIZE 0x100
@@ -489,7 +489,7 @@ atl_dev_start(struct rte_eth_dev *dev)
/* set adapter started */
hw->adapter_stopped = 0;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -656,18 +656,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
uint32_t link_speeds = dev->data->dev_conf.link_speeds;
uint32_t speed_mask = 0;
- if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed_mask = hw->aq_nic_cfg->link_speed_msk;
} else {
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
speed_mask |= AQ_NIC_RATE_10G;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
speed_mask |= AQ_NIC_RATE_5G;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
speed_mask |= AQ_NIC_RATE_1G;
- if (link_speeds & ETH_LINK_SPEED_2_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed_mask |= AQ_NIC_RATE_2G5;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
speed_mask |= AQ_NIC_RATE_100M;
}
@@ -1128,10 +1128,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
return 0;
}
@@ -1176,10 +1176,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
u32 fc = AQ_NIC_FC_OFF;
int err = 0;
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
memset(&old, 0, sizeof(old));
/* load old link status */
@@ -1199,8 +1199,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
return 0;
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = hw->aq_link_status.mbps;
rte_eth_linkstatus_set(dev, &link);
@@ -1334,7 +1334,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1533,13 +1533,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
hw->aq_fw_ops->get_flow_control(hw, &fc);
if (fc == AQ_NIC_FC_OFF)
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (fc & AQ_NIC_FC_RX)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (fc & AQ_NIC_FC_TX)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
return 0;
}
@@ -1554,13 +1554,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
if (hw->aq_fw_ops->set_flow_control == NULL)
return -ENOTSUP;
- if (fc_conf->mode == RTE_FC_NONE)
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
- else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
- else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
- else if (fc_conf->mode == RTE_FC_FULL)
+ else if (fc_conf->mode == RTE_ETH_FC_FULL)
hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1731,14 +1731,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+ ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
- cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+ cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
for (i = 0; i < dev->data->nb_rx_queues; i++)
hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
- if (mask & ETH_VLAN_EXTEND_MASK)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK)
ret = -ENOTSUP;
return ret;
@@ -1754,10 +1754,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
PMD_INIT_FUNC_TRACE();
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
break;
default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c97..da993be35faa 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
#include "hw_atl/hw_atl_utils.h"
#define ATL_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define ATL_DEV_PRIVATE_TO_HW(adapter) \
(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306ec..ddf110d6ce7e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..e870ced7e992 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -2011,9 +2011,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
/* Setup required number of queues */
_avp_set_queue_counts(eth_dev);
- mask = (ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ mask = (RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
ret = avp_vlan_offload_set(eth_dev, mask);
if (ret < 0) {
PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2153,8 +2153,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_eth_link *link = &eth_dev->data->dev_link;
- link->link_speed = ETH_SPEED_NUM_10G;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_status = !!(avp->flags & AVP_F_LINKUP);
return -1;
@@ -2204,8 +2204,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
}
return 0;
@@ -2218,9 +2218,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
uint64_t offloads = dev_conf->rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
else
avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2229,13 +2229,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
}
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index 786288a7b079..c0f033e06b15 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
pdata->rss_hf = rss_conf->rss_hf;
rss_hf = rss_conf->rss_hf;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..f33b9245bcf9 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
/* Checksum offload to hardware */
pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
}
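``RTE_ETH_RX_OFFLOAD_CHECKSUM`` used above is not a new bit but a convenience
OR of the three L3/L4 checksum flags, so the single AND covers all of them; a
check of that assumption:

    #include <assert.h>
    #include <rte_ethdev.h>

    int
    main(void)
    {
        assert(RTE_ETH_RX_OFFLOAD_CHECKSUM ==
               (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
                RTE_ETH_RX_OFFLOAD_TCP_CKSUM));
        return 0;
    }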
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
{
struct axgbe_port *pdata = dev->data->dev_private;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
pdata->rss_enable = 1;
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
pdata->rss_enable = 0;
else
return -1;
@@ -383,7 +383,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -519,8 +519,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -550,8 +550,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
continue;
reta_conf[idx].reta[shift] = pdata->rss_table[i];
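The idx/shift arithmetic above (it recurs in the bnxt hunks below) is the
standard way RETA entries are addressed: the table is carried as groups of
``RTE_ETH_RETA_GROUP_SIZE`` (64) entries, each group with a validity mask. A
standalone sketch of reading one entry:

    #include <stdint.h>
    #include <rte_ethdev.h>

    static uint16_t
    reta_entry(const struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t i)
    {
        uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
        uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

        if (!(reta_conf[idx].mask & (1ULL << shift)))
            return UINT16_MAX; /* entry not selected by the caller */

        return reta_conf[idx].reta[shift];
    }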
@@ -588,13 +588,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
- if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
/* Set the RSS options */
@@ -763,7 +763,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
link.link_status = pdata->phy_link;
link.link_speed = pdata->phy_speed;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1206,25 +1206,25 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (pdata->hw_feat.rss) {
dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1261,13 +1261,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc.autoneg = pdata->pause_autoneg;
if (pdata->rx_pause && pdata->tx_pause)
- fc.mode = RTE_FC_FULL;
+ fc.mode = RTE_ETH_FC_FULL;
else if (pdata->rx_pause)
- fc.mode = RTE_FC_RX_PAUSE;
+ fc.mode = RTE_ETH_FC_RX_PAUSE;
else if (pdata->tx_pause)
- fc.mode = RTE_FC_TX_PAUSE;
+ fc.mode = RTE_ETH_FC_TX_PAUSE;
else
- fc.mode = RTE_FC_NONE;
+ fc.mode = RTE_ETH_FC_NONE;
fc_conf->high_water = (1024 + (fc.low_water[0] << 9)) / 1024;
fc_conf->low_water = (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1297,13 +1297,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
AXGMAC_IOWRITE(pdata, reg, reg_val);
fc.mode = fc_conf->mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
} else {
@@ -1385,15 +1385,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
fc.mode = pfc_conf->fc.mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1492,11 +1492,11 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
if (frame_size > AXGBE_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
@@ -1842,8 +1842,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+ case RTE_ETH_VLAN_TYPE_INNER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
if (qinq) {
if (tpid != 0x8100 && tpid != 0x88a8)
PMD_DRV_LOG(ERR,
@@ -1860,8 +1860,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"Inner type not supported in single tag\n");
}
break;
- case ETH_VLAN_TYPE_OUTER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+ case RTE_ETH_VLAN_TYPE_OUTER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
if (qinq) {
PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
/*Enable outer VLAN tag*/
@@ -1878,11 +1878,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"tag supported 0x8100/0x88A8\n");
}
break;
- case ETH_VLAN_TYPE_MAX:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+ case RTE_ETH_VLAN_TYPE_MAX:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
break;
- case ETH_VLAN_TYPE_UNKNOWN:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+ case RTE_ETH_VLAN_TYPE_UNKNOWN:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
break;
}
return 0;
@@ -1916,8 +1916,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1927,8 +1927,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_stripping(pdata);
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1938,14 +1938,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_filtering(pdata);
}
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
axgbe_vlan_extend_enable(pdata);
/* Set global registers with default ethertype*/
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
} else {
PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
/* Receive Side Scaling */
#define AXGBE_RSS_OFFLOAD ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define AXGBE_RSS_HASH_KEY_SIZE 40
#define AXGBE_RSS_MAX_TABLE_SIZE 256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
pdata->an_int = 0;
axgbe_an73_clear_interrupts(pdata);
pdata->eth_dev->data->dev_link.link_status =
- ETH_LINK_DOWN;
+ RTE_ETH_LINK_DOWN;
} else if (pdata->an_state == AXGBE_AN_ERROR) {
PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 33f709a6bb02..baa17a5fb43f 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
(DMA_CH_INC * rxq->queue_id));
rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
DMA_CH_RDTR_LO);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..14d91f868cd8 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
link.link_speed = sc->link_vars.line_speed;
switch (sc->link_vars.duplex) {
case DUPLEX_FULL:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case DUPLEX_HALF:
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
link.link_status = sc->link_vars.link_up;
return rte_eth_linkstatus_set(dev, &link);
@@ -181,7 +181,7 @@ bnx2x_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
dev->data->mtu = sc->mtu;
}
@@ -412,7 +412,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
"VF device is no longer operational");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
}
return ret;
@@ -538,8 +538,8 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
- dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -675,7 +675,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
bnx2x_load_firmware(sc);
assert(sc->firmware);
- if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
sc->udp_rss = 1;
sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 494a1eff3700..7e313c2fb5af 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,40 +569,40 @@ struct bnxt_rep_info {
#define BNXT_FW_STATUS_SHUTDOWN 0x100000
#define BNXT_ETH_RSS_SUPPORT ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb2d..3f3596f39f2f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
goto err_out;
/* Alloc RSS context only if RSS mode is enabled */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int j, nr_ctxs = bnxt_rss_ctxts(bp);
/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
* setting is not available at this time, it will not be
* configured correctly in the CFA.
*/
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
- (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
true : false);
if (rc)
goto err_out;
@@ -738,11 +738,11 @@ static int bnxt_start_nic(struct bnxt *bp)
if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
bp->flags |= BNXT_FLAG_JUMBO;
} else {
bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
bp->flags &= ~BNXT_FLAG_JUMBO;
}
@@ -908,35 +908,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
link_speed = bp->link_info->support_pam4_speeds;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
- speed_capa |= ETH_LINK_SPEED_2_5G;
+ speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
- speed_capa |= ETH_LINK_SPEED_20G;
+ speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
if (bp->link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -980,8 +980,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
dev_info->tx_queue_offload_capa;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
@@ -1030,8 +1030,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
*/
/* VMDq resources */
- vpool = 64; /* ETH_64_POOLS */
- vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+ vpool = 64; /* RTE_ETH_64_POOLS */
+ vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
for (i = 0; i < 4; vpool >>= 1, i++) {
if (max_vnics > vpool) {
for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1126,18 +1126,18 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
(uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
goto resource_error;
- if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+ if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
bp->max_vnics < eth_dev->data->nb_rx_queues)
goto resource_error;
bp->rx_cp_nr_rings = bp->rx_nr_rings;
bp->tx_cp_nr_rings = bp->tx_nr_rings;
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
eth_dev->data->mtu =
eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
@@ -1168,7 +1168,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
eth_dev->data->port_id,
(uint32_t)link->link_speed,
- (link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n"));
else
PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1184,10 +1184,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
uint16_t buf_size;
int i;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return 1;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
return 1;
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1232,16 +1232,16 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
* a limited subset have been enabled.
*/
if (eth_dev->data->dev_conf.rxmode.offloads &
- ~(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_VLAN_FILTER))
+ ~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
goto use_scalar_rx;
#if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1293,7 +1293,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
* or tx offloads.
*/
if (eth_dev->data->scattered_rx ||
- (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
BNXT_TRUFLOW_EN(bp))
goto use_scalar_tx;
@@ -1594,10 +1594,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
bnxt_link_update_op(eth_dev, 1);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- vlan_mask |= ETH_VLAN_FILTER_MASK;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- vlan_mask |= ETH_VLAN_STRIP_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
if (rc)
goto error;
@@ -1819,8 +1819,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
/* Retrieve link info from hardware */
rc = bnxt_get_hwrm_link_config(bp, &new);
if (rc) {
- new.link_speed = ETH_LINK_SPEED_100M;
- new.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new.link_speed = RTE_ETH_LINK_SPEED_100M;
+ new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR,
"Failed to retrieve link rc = 0x%x!\n", rc);
goto out;
@@ -2014,7 +2014,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
if (!vnic->rss_table)
return -EINVAL;
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return -EINVAL;
if (reta_size != tbl_size) {
@@ -2027,8 +2027,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
for (i = 0; i < reta_size; i++) {
struct bnxt_rx_queue *rxq;
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << sft)))
continue;
@@ -2081,8 +2081,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
}
for (idx = 0, i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- sft = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ sft = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << sft)) {
uint16_t qid;
@@ -2120,7 +2120,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
* If RSS enablement were different than dev_configure,
* then return -EINVAL
*/
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (!rss_conf->rss_hf)
PMD_DRV_LOG(ERR, "Hash type NONE\n");
} else {
@@ -2138,7 +2138,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
vnic->hash_mode =
bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
- ETH_RSS_LEVEL(rss_conf->rss_hf));
+ RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
/*
* If hashkey is not specified, use the previously configured
@@ -2183,30 +2183,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
hash_types = vnic->hash_type;
rss_conf->rss_hf = 0;
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
}
@@ -2246,17 +2246,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
fc_conf->autoneg = 1;
switch (bp->link_info->pause) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
}
return 0;
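The same folding of independent rx/tx pause state into the four
``RTE_ETH_FC_*`` modes recurs in the atlantic and axgbe hunks above; the
pattern in isolation, as a sketch:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static enum rte_eth_fc_mode
    fc_mode_from_pause(bool rx_pause, bool tx_pause)
    {
        if (rx_pause && tx_pause)
            return RTE_ETH_FC_FULL;
        if (rx_pause)
            return RTE_ETH_FC_RX_PAUSE;
        if (tx_pause)
            return RTE_ETH_FC_TX_PAUSE;
        return RTE_ETH_FC_NONE;
    }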
@@ -2279,11 +2279,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
bp->link_info->auto_pause = 0;
bp->link_info->force_pause = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2294,7 +2294,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
}
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2305,7 +2305,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
}
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2336,7 +2336,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2351,7 +2351,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
bp->vxlan_port_cnt++;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2389,7 +2389,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (!bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2406,7 +2406,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
port = bp->vxlan_fw_dst_port_id;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (!bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2584,7 +2584,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
int rc;
vnic = BNXT_GET_DEFAULT_VNIC(bp);
- if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
/* Remove any VLAN filters programmed */
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
@@ -2604,7 +2604,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
bnxt_add_vlan_filter(bp, 0);
}
PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
return 0;
}
@@ -2617,7 +2617,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
/* Destroy vnic filters and vnic */
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
}
@@ -2656,7 +2656,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = bnxt_add_vlan_filter(bp, 0);
if (rc)
return rc;
@@ -2674,7 +2674,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
return rc;
}
@@ -2694,22 +2694,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
if (!dev->data->dev_started)
return 0;
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering */
rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
else
PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2724,10 +2724,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
{
struct bnxt *bp = dev->data->dev_private;
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
- if (vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -2739,7 +2739,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
return -EINVAL;
}
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
switch (tpid) {
case RTE_ETHER_TYPE_QINQ:
bp->outer_tpid_bd =
@@ -2767,7 +2767,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
}
bp->outer_tpid_bd |= tpid;
PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer vlan in QinQ\n");
return -EINVAL;
@@ -2807,7 +2807,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
bnxt_del_dflt_mac_filter(bp, vnic);
memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
/* This filter will allow only untagged packets */
rc = bnxt_add_vlan_filter(bp, 0);
} else {
@@ -3029,10 +3029,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (new_mtu > RTE_ETHER_MTU) {
bp->flags |= BNXT_FLAG_JUMBO;
bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
} else {
bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
bp->flags &= ~BNXT_FLAG_JUMBO;
}
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a6f..98e1107f629c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -974,7 +974,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -1157,7 +1157,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
rxq = bp->rx_queues[act_q->index];
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
vnic->fw_vnic_id != INVALID_HW_RING_ID)
goto use_vnic;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index f29d57423585..0d9dda0c362c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
uint16_t j = dst_id - 1;
//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
- if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+ if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
conf->pool_map[j].pools & (1UL << j)) {
PMD_DRV_LOG(DEBUG,
"Add vlan %u to vmdq pool %u\n",
@@ -2955,12 +2955,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
{
uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
- if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+ if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
switch (conf_link_speed) {
- case ETH_LINK_SPEED_10M_HD:
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
}
@@ -2977,51 +2977,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
{
uint16_t eth_link_speed = 0;
- if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
- return ETH_LINK_SPEED_AUTONEG;
+ if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+ return RTE_ETH_LINK_SPEED_AUTONEG;
- switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_100M:
- case ETH_LINK_SPEED_100M_HD:
+ switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
break;
- case ETH_LINK_SPEED_2_5G:
+ case RTE_ETH_LINK_SPEED_2_5G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
break;
- case ETH_LINK_SPEED_20G:
+ case RTE_ETH_LINK_SPEED_20G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
break;
@@ -3034,11 +3034,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
return eth_link_speed;
}
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
- ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+ RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
static int bnxt_validate_link_speed(struct bnxt *bp)
{
@@ -3047,13 +3047,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
uint32_t link_speed_capa;
uint32_t one_speed;
- if (link_speed == ETH_LINK_SPEED_AUTONEG)
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
return 0;
link_speed_capa = bnxt_get_speed_capabilities(bp);
- if (link_speed & ETH_LINK_SPEED_FIXED) {
- one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+ if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+ one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
if (one_speed & (one_speed - 1)) {
PMD_DRV_LOG(ERR,
@@ -3083,71 +3083,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
{
uint16_t ret = 0;
- if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
if (bp->link_info->support_speeds)
return bp->link_info->support_speeds;
link_speed = BNXT_SUPPORTED_SPEEDS;
}
- if (link_speed & ETH_LINK_SPEED_100M)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_100M_HD)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_1G)
+ if (link_speed & RTE_ETH_LINK_SPEED_1G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
- if (link_speed & ETH_LINK_SPEED_2_5G)
+ if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
- if (link_speed & ETH_LINK_SPEED_10G)
+ if (link_speed & RTE_ETH_LINK_SPEED_10G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
- if (link_speed & ETH_LINK_SPEED_20G)
+ if (link_speed & RTE_ETH_LINK_SPEED_20G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
- if (link_speed & ETH_LINK_SPEED_25G)
+ if (link_speed & RTE_ETH_LINK_SPEED_25G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
- if (link_speed & ETH_LINK_SPEED_40G)
+ if (link_speed & RTE_ETH_LINK_SPEED_40G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
- if (link_speed & ETH_LINK_SPEED_50G)
+ if (link_speed & RTE_ETH_LINK_SPEED_50G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
- if (link_speed & ETH_LINK_SPEED_100G)
+ if (link_speed & RTE_ETH_LINK_SPEED_100G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
- if (link_speed & ETH_LINK_SPEED_200G)
+ if (link_speed & RTE_ETH_LINK_SPEED_200G)
ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
return ret;
}
static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
{
- uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+ uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
switch (hw_link_speed) {
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
- eth_link_speed = ETH_SPEED_NUM_100M;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
- eth_link_speed = ETH_SPEED_NUM_1G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
- eth_link_speed = ETH_SPEED_NUM_2_5G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
- eth_link_speed = ETH_SPEED_NUM_10G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
- eth_link_speed = ETH_SPEED_NUM_20G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
- eth_link_speed = ETH_SPEED_NUM_25G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
- eth_link_speed = ETH_SPEED_NUM_40G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
- eth_link_speed = ETH_SPEED_NUM_50G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
- eth_link_speed = ETH_SPEED_NUM_100G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
- eth_link_speed = ETH_SPEED_NUM_200G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_200G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
default:
@@ -3160,16 +3160,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
{
- uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (hw_link_duplex) {
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
/* FALLTHROUGH */
- eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
- eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
default:
PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3198,12 +3198,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
link->link_speed =
bnxt_parse_hw_link_speed(link_info->link_speed);
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
link->link_status = link_info->link_up;
link->link_autoneg = link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
- ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+ RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
exit:
return rc;
}
@@ -3229,7 +3229,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
if (BNXT_CHIP_P5(bp) &&
- dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+ dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
/* 40G is not supported as part of media auto detect.
* The speed should be forced and autoneg disabled
* to configure 40G speed.
@@ -3320,7 +3320,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
HWRM_CHECK_RESULT();
- bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+ bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
svif_info = rte_le_to_cpu_16(resp->svif_info);
if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
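
[Aside, not in the patch: the link macros touched above as seen from the
application side. port_id is assumed already configured and started; this is
a sketch only, with error handling trimmed.]

struct rte_eth_conf conf = { 0 };
struct rte_eth_link link;

/* Force a fixed 40G link using the renamed speed flags. */
conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_40G;
/* ... rte_eth_dev_configure() / rte_eth_dev_start() ... */

if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
    link.link_status == RTE_ETH_LINK_UP &&
    link.link_speed == RTE_ETH_SPEED_NUM_40G)
        printf("40G link up, %s duplex\n",
               link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
               "full" : "half");
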
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d7f..a9f5e13476b0 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -536,7 +536,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 957b175f1b89..632a611bf612 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -185,7 +185,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
int tpa_info_len = 0;
- if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
int tpa_max = BNXT_TPA_MAX_AGGS(bp);
tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -278,7 +278,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
ag_bitmap_start, ag_bitmap_len);
/* TPA info */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rx_ring_info->tpa_info =
((struct bnxt_tpa_info *)((char *)mz->addr +
tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index bbcb3b06e7df..0ac3a2b3b7d3 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -41,13 +41,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->nr_vnics = 0;
/* Multi-queue mode */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* FALLTHROUGH */
/* ETH_8/64_POOLs */
pools = conf->nb_queue_pools;
@@ -55,14 +55,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
max_pools = RTE_MIN(bp->max_vnics,
RTE_MIN(bp->max_l2_ctx,
RTE_MIN(bp->max_rsscos_ctx,
- ETH_64_POOLS)));
+ RTE_ETH_64_POOLS)));
PMD_DRV_LOG(DEBUG,
"pools = %u max_pools = %u\n",
pools, max_pools);
if (pools > max_pools)
pools = max_pools;
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
break;
default:
@@ -100,7 +100,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
ring_idx, rxq, i, vnic);
}
if (i == 0) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
bp->eth_dev->data->promiscuous = 1;
vnic->flags |= BNXT_VNIC_INFO_PROMISC;
}
@@ -110,8 +110,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
vnic->end_grp_id = end_grp_id;
if (i) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
- !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+ !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
vnic->rss_dflt_cr = true;
goto skip_filter_allocation;
}
@@ -136,14 +136,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->rx_num_qs_per_vnic = nb_q_per_grp;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
if (bp->flags & BNXT_FLAG_UPDATE_HASH)
bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
for (i = 0; i < bp->nr_vnics; i++) {
- uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+ uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
vnic = &bp->vnic_info[i];
vnic->hash_type =
@@ -338,7 +338,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
rxq->queue_id = queue_idx;
rxq->port_id = eth_dev->data->port_id;
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -454,7 +454,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
vnic = rxq->vnic;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -525,7 +525,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rxq->rx_started = false;
PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (BNXT_HAS_RING_GRPS(bp))
vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 73fbdd17d126..0909bab89b76 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
dev_conf = &rxq->bp->eth_dev->data->dev_conf;
offloads = dev_conf->rxmode.offloads;
- outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+ outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
/* Initialize ol_flags table. */
pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 47824334ae3e..401dd83f4e7d 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -350,7 +350,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -476,7 +476,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
{
uint16_t hwrm_type = 0;
- if (rte_type & ETH_RSS_IPV4)
+ if (rte_type & RTE_ETH_RSS_IPV4)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
- if (rte_type & ETH_RSS_IPV6)
+ if (rte_type & RTE_ETH_RSS_IPV6)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
{
uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
- bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
- bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP));
+ bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+ bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP));
bool l3_only = l3 && !l4;
bool l3_and_l4 = l3 && l4;
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
* return default hash mode.
*/
if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
- return ETH_RSS_LEVEL_PMD_DEFAULT;
+ return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
- rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
- rss_level |= ETH_RSS_LEVEL_INNERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
else
- rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+ rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
return rss_level;
}
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
if (vf >= bp->pdev->max_vfs)
return -EINVAL;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
return -ENOTSUP;
}
/* Is this really the correct mapping? VFd seems to think it is. */
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
flag |= BNXT_VNIC_INFO_PROMISC;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
flag |= BNXT_VNIC_INFO_BCAST;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptor limits */
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[52]; /**< 52-byte hash key buffer. */
uint8_t rss_key_len; /**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 128754f4595a..20adfcf0ea9c 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
uint16_t key_speed;
switch (speed) {
- case ETH_SPEED_NUM_NONE:
+ case RTE_ETH_SPEED_NUM_NONE:
key_speed = 0x00;
break;
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
key_speed = BOND_LINK_SPEED_KEY_10M;
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
key_speed = BOND_LINK_SPEED_KEY_100M;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
key_speed = BOND_LINK_SPEED_KEY_1000M;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
key_speed = BOND_LINK_SPEED_KEY_10G;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
key_speed = BOND_LINK_SPEED_KEY_20G;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
key_speed = BOND_LINK_SPEED_KEY_40G;
break;
default:
@@ -866,7 +866,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (ret >= 0 && link_info.link_status != 0) {
key = link_speed_key(link_info.link_speed) << 1;
- if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+ if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
key |= BOND_LINK_FULL_DUPLEX_KEY;
} else {
key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index eb8d15d16034..a6fe0304c648 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
return 0;
internals = bonded_eth_dev->data->dev_private;
@@ -586,7 +586,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
return -1;
}
- if (link_props.link_status == ETH_LINK_UP) {
+ if (link_props.link_status == RTE_ETH_LINK_UP) {
if (internals->active_slave_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
@@ -721,7 +721,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
internals->tx_queue_offload_capa = 0;
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = 0;
internals->candidate_max_rx_pktlen = 0;
internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index a6755661c49c..a2903366a3f6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1373,8 +1373,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
* In any other mode the link properties are set to default
* values of AUTONEG/DUPLEX
*/
- ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
- ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+ ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
}
}
@@ -1704,7 +1704,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
/* If RSS is enabled for bonding, try to enable it for slaves */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (internals->rss_key_len != 0) {
slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
@@ -1721,23 +1721,23 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
@@ -1838,7 +1838,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* If RSS is enabled for bonding, synchronize RETA */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int i;
struct bond_dev_private *internals;
@@ -1961,7 +1961,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 1;
internals = eth_dev->data->dev_private;
@@ -2101,7 +2101,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
tlb_last_obytets[internals->active_slaves[i]] = 0;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
@@ -2423,15 +2423,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
bond_ctx = ethdev->data->dev_private;
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
bond_ctx->active_slave_count == 0) {
- ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
- ethdev->data->dev_link.link_status = ETH_LINK_UP;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
if (wait_to_complete)
link_update = rte_eth_link_get;
@@ -2456,7 +2456,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
&slave_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
- ETH_SPEED_NUM_NONE;
+ RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
"Slave (port %u) link get failed: %s",
bond_ctx->active_slaves[idx],
@@ -2498,7 +2498,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
* In these modes the maximum theoretical link speed is the sum
* of all the slaves
*/
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2872,7 +2872,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
goto link_update;
/* check link state properties if bonded link is up */
- if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
"for slave %d in bonding mode %d",
@@ -2888,7 +2888,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (internals->active_slave_count < 1) {
/* If first active slave, then change link status */
bonded_eth_dev->data->dev_link.link_status =
- ETH_LINK_UP;
+ RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
@@ -2980,12 +2980,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
- RTE_RETA_GROUP_SIZE;
+ reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+ RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < reta_count; i++) {
internals->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -3018,8 +3018,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
/* Copy RETA table */
- for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
@@ -3279,7 +3279,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->max_rx_pktlen = 0;
/* Initially allow to choose any offload type */
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
memset(&internals->default_rxconf, 0,
sizeof(internals->default_rxconf));
@@ -3508,7 +3508,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
* set key to the value specified in port RSS configuration.
* Fall back to default RSS key if the key is not specified
*/
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
internals->rss_key_len =
dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len;
@@ -3523,9 +3523,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
internals->reta_conf[i].mask = ~0LL;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
internals->reta_conf[i].reta[j] =
- (i * RTE_RETA_GROUP_SIZE + j) %
+ (i * RTE_ETH_RETA_GROUP_SIZE + j) %
dev->data->nb_rx_queues;
}
}
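
[Not part of the diff: a sketch of the RETA layout the bonding code above
maintains, written against the renamed RTE_ETH_RETA_GROUP_SIZE.
spread_reta() and nb_queues are illustrative names; reta_size is assumed
to be at most 512.]

#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
        struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
                                                  RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < reta_size; i++) {
                /* Each 64-entry group carries its own validity mask. */
                reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
                reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta
                        [i % RTE_ETH_RETA_GROUP_SIZE] = i % nb_queues;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}
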
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6cf14c8..9a09748673b2 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
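
[For reference only, not in the patch: how an application would request the
Tx offloads that nix_tx_offload_flags() maps above, guarded by the advertised
capability mask. dev_info and conf are assumed to come from the usual
rte_eth_dev_info_get()/configure path; a generic sketch, not cnxk-specific.]

uint64_t want = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
                RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
                RTE_ETH_TX_OFFLOAD_TCP_TSO |
                RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

/* Request only what the PMD advertises; report the remainder. */
conf.txmode.offloads = want & dev_info.tx_offload_capa;
if (conf.txmode.offloads != want)
        printf("some Tx offloads unsupported: 0x%llx\n",
               (unsigned long long)(want & ~dev_info.tx_offload_capa));
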
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767ac3dd6..e3b1bd8ad225 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -76,12 +76,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c60ba2..f63b8fabefd4 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -77,11 +77,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678916bb..9ff2d3dc114a 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -277,9 +277,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
/* Platform specific checks */
if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
plt_err("Outer IP and SCTP checksum unsupported");
return -EINVAL;
}
@@ -530,17 +530,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* TSO not supported for earlier chip revisions
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
- dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
/* 50G and 100G to be supported for board version C0
* and above of CN9K.
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
}
dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd61f79..08ee28658bce 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -76,12 +76,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a14fd79..f35ae8e70438 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -76,11 +76,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0e3652ed5109..f6b75645bb69 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
if (roc_nix_is_vf_or_sdp(&dev->nix) ||
dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
uint32_t speed_capa;
/* Auto negotiation disabled */
- speed_capa = ETH_LINK_SPEED_FIXED;
+ speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
- speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
}
return speed_capa;
@@ -54,8 +54,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
}
@@ -90,7 +90,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct rte_eth_fc_conf fc_conf = {0};
int rc;
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -98,10 +98,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
@@ -122,11 +122,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_ax() &&
dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
- (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+ (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_cfg.mode =
- (fc_cfg.mode == RTE_FC_FULL ||
- fc_cfg.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_cfg.mode == RTE_ETH_FC_FULL ||
+ fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -169,7 +169,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
* Maximum three segments can be supported with W8; choose
* NIX_MAXSQESZ_W16 for multi-segment offload.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -361,7 +361,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
rc = cnxk_nix_tsc_convert(dev);
if (rc) {
plt_err("Failed to calculate delta and freq mult");
@@ -434,24 +434,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
dev->ethdev_rss_hf = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -460,34 +460,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -513,7 +513,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
uint64_t rss_hf;
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
@@ -729,8 +729,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
/* Nothing much to do if offload is not enabled */
if (!(dev->tx_offloads &
- (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+ (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
return 0;
/* Setup LSO formats in AF. It's a no-op if other ethdev has
@@ -778,13 +778,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
@@ -814,7 +814,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
/* Prepare rx cfg */
rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
}
@@ -1191,12 +1191,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled on PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
cnxk_eth_dev_ops.timesync_enable(eth_dev);
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2528b3cdaa0c..53a657f8865d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -54,41 +54,44 @@
CNXK_NIX_TX_NB_SEG_MAX)
#define CNXK_NIX_RSS_L3_L4_SRC_DST \
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
#define CNXK_NIX_RSS_OFFLOAD \
- (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
- ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+ (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL | \
+ RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST | \
+ RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
#define CNXK_NIX_TX_OFFLOAD_CAPA \
- (DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define CNXK_NIX_RX_OFFLOAD_CAPA \
- (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ (RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE \
- (ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+ (RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
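
[Illustrative, not part of the diff: selecting inner-header RSS with the
renamed field and level macros used in the masks above. dev_info and conf
are assumed from the usual configure path; the level bits are kept out of
the capability AND since flow_type_rss_offloads covers hash fields only.]

uint64_t rss_hf = (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
                  dev_info.flow_type_rss_offloads;

conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
conf.rx_adv_conf.rss_conf.rss_key = NULL;      /* keep the PMD default key */
conf.rx_adv_conf.rss_conf.rss_hf =
        rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST;  /* hash on inner headers */
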
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb0954e..bf0c6d6b4ad8 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -49,11 +49,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
val = ROC_NIX_RSS_RETA_SZ_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
val = ROC_NIX_RSS_RETA_SZ_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
val = ROC_NIX_RSS_RETA_SZ_256;
else
val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..0f6817f75d4a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,25 +81,25 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
- {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
- {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
- {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_SECURITY, " Security,"},
- {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+ {RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+ {RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+ {RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
"Scalar, Rx Offloads:"
@@ -143,28 +143,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
- {DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
- {DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
- {DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
- {DEV_TX_OFFLOAD_SECURITY, " Security,"},
- {DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+ {RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
};
static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
"Scalar, Tx Offloads:"
@@ -204,8 +204,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
enum rte_eth_fc_mode mode_map[] = {
- RTE_FC_NONE, RTE_FC_RX_PAUSE,
- RTE_FC_TX_PAUSE, RTE_FC_FULL
+ RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+ RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
};
struct roc_nix *nix = &dev->nix;
int mode;
@@ -265,10 +265,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -409,13 +409,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
plt_err("Scatter offload is not enabled for mtu");
goto exit;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
plt_err("Greater than maximum supported packet length");
goto exit;
@@ -443,9 +443,9 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
frame_size += RTE_ETHER_CRC_LEN;
if (frame_size > RTE_ETHER_MAX_LEN)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Update max_rx_pkt_len */
data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
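This MTU-to-JUMBO_FRAME toggle recurs across the drivers touched below (cxgbe, dpaa, dpaa2, e1000); a condensed sketch (not part of this patch) of the common pattern:
#include <rte_ethdev.h>
#include <rte_ether.h>
/* Illustrative only: frames larger than the standard Ethernet
 * maximum flip the JUMBO_FRAME Rx offload bit and update the
 * configured maximum Rx packet length. */
static void
update_jumbo_offload(struct rte_eth_dev_data *data, uint32_t frame_size)
{
	if (frame_size > RTE_ETHER_MAX_LEN)
		data->dev_conf.rxmode.offloads |=
			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
	else
		data->dev_conf.rxmode.offloads &=
			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
}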
@@ -746,8 +746,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta[idx] = reta_conf[i].reta[j];
idx++;
@@ -782,8 +782,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
goto fail;
/* Copy RETA table */
- for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = reta[idx];
idx++;
@@ -816,7 +816,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_conf->rss_key)
roc_nix_rss_key_set(nix, rss_conf->rss_key);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 3fdbdba49549..1cff8d56e65b 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
plt_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex"
: "half-duplex");
else
@@ -66,7 +66,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
eth_link.link_status = link->status;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
/* Print link info */
@@ -94,17 +94,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
return 0;
if (roc_nix_is_lbk(&dev->nix)) {
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_autoneg = ETH_LINK_FIXED;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
rc = roc_nix_mac_link_info_get(&dev->nix, &info);
if (rc)
return rc;
link.link_status = info.status;
link.link_speed = info.speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (info.full_duplex)
link.link_duplex = info.full_duplex;
}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rc = roc_nix_ptp_rx_ena_dis(nix, true);
if (!rc) {
@@ -257,7 +257,7 @@ int
cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+ uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
struct roc_nix *nix = &dev->nix;
int rc = 0;
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index 32c1b5dee5fa..ecdfee7b11a6 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..dee618a0db5f 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,32 +28,32 @@
#define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
#define CXGBE_DEFAULT_RSS_KEY_LEN 40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
/* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
/* Devargs filtermode and filtermask representation */
enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..78c1381fdb47 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
}
new_link.link_status = cxgbe_force_linkup(adapter) ?
- ETH_LINK_UP : pi->link_cfg.link_ok;
+ RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -316,10 +316,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
/* set to jumbo mode if needed */
if (new_mtu > CXGBE_ETH_MAX_LEN)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
@@ -396,7 +396,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
goto out;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
else
eth_dev->data->scattered_rx = 0;
@@ -460,9 +460,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
CXGBE_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!(adapter->flags & FW_QUEUE_BOUND)) {
err = cxgbe_setup_sge_fwevtq(adapter);
@@ -685,10 +685,10 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/* Set to jumbo mode if necessary */
if (pkt_len > CXGBE_ETH_MAX_LEN)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
@@ -1079,13 +1079,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
rx_pause = 1;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1098,12 +1098,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
u8 tx_pause = 0, rx_pause = 0;
int ret;
- if (fc_conf->mode == RTE_FC_FULL) {
+ if (fc_conf->mode == RTE_ETH_FC_FULL) {
tx_pause = 1;
rx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
tx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
rx_pause = 1;
}
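The same mode decoding appears in the cnxk flow-control hunk earlier; a minimal sketch (not part of this patch):
#include <stdbool.h>
#include <rte_ethdev.h>
/* Illustrative only: RTE_ETH_FC_FULL pauses both directions, the
 * RX/TX variants pause one direction each, RTE_ETH_FC_NONE neither. */
static void
fc_mode_to_pause(enum rte_eth_fc_mode mode, bool *rx_pause, bool *tx_pause)
{
	*rx_pause = (mode == RTE_ETH_FC_FULL || mode == RTE_ETH_FC_RX_PAUSE);
	*tx_pause = (mode == RTE_ETH_FC_FULL || mode == RTE_ETH_FC_TX_PAUSE);
}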
@@ -1199,9 +1199,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
}
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1245,8 +1245,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
@@ -1276,8 +1276,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (!(reta_conf[idx].mask & (1ULL << shift)))
continue;
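Both RETA hunks index the table the same way; a minimal sketch (not part of this patch) of the group/mask walk, with each 64-entry group carrying its own valid mask:
#include <rte_ethdev.h>
/* Illustrative only: copy valid entries out of the caller's RETA
 * configuration; each rte_eth_rss_reta_entry64 spans
 * RTE_ETH_RETA_GROUP_SIZE (64) indirection entries. */
static void
copy_reta(const struct rte_eth_rss_reta_entry64 *conf,
	  uint16_t reta_size, uint16_t *reta)
{
	uint16_t i, idx, shift;

	for (i = 0; i < reta_size; i++) {
		idx = i / RTE_ETH_RETA_GROUP_SIZE;
		shift = i % RTE_ETH_RETA_GROUP_SIZE;
		if (conf[idx].mask & (1ULL << shift))
			reta[i] = conf[idx].reta[shift];
	}
}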
@@ -1478,7 +1478,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_100G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
}
@@ -1487,7 +1487,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_50G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
}
@@ -1496,7 +1496,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_25G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..54723edc2144 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1671,7 +1671,7 @@ int cxgbe_link_start(struct port_info *pi)
* that step explicitly.
*/
ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
- !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+ !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
true);
if (ret == 0) {
ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1695,7 +1695,7 @@ int cxgbe_link_start(struct port_info *pi)
}
if (ret == 0 && cxgbe_force_linkup(adapter))
- pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return ret;
}
@@ -1726,10 +1726,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_UDPEN;
@@ -1866,7 +1866,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
{
#define SET_SPEED(__speed_name) \
do { \
- *speed_caps |= ETH_LINK_ ## __speed_name; \
+ *speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
} while (0)
#define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1953,7 +1953,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
speed_caps);
if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
- *speed_caps |= ETH_LINK_SPEED_FIXED;
+ *speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
}
/**
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..eddb818c4861 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -366,7 +366,7 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843d2..c466256137a3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,30 +54,30 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
@@ -189,10 +189,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > DPAA_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
uint32_t max_len;
DPAA_PMD_DEBUG("enabling jumbo");
@@ -259,7 +259,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
}
- if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
@@ -304,43 +304,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
/* Configure link only if link is UP */
if (link->link_status) {
- if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
/* Start autoneg only if link is not in autoneg mode */
if (!link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- } else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
- switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M_HD:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ } else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+ switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_10M:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10M:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_100M_HD:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M_HD:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_100M:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_1G:
- speed = ETH_SPEED_NUM_1G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_1G:
+ speed = RTE_ETH_SPEED_NUM_1G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_2_5G:
- speed = ETH_SPEED_NUM_2_5G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_2_5G:
+ speed = RTE_ETH_SPEED_NUM_2_5G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_10G:
- speed = ETH_SPEED_NUM_10G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10G:
+ speed = RTE_ETH_SPEED_NUM_10G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
- speed = ETH_SPEED_NUM_NONE;
- duplex = ETH_LINK_FULL_DUPLEX;
+ speed = RTE_ETH_SPEED_NUM_NONE;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
}
/* Set link speed */
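A condensed sketch (not part of this patch) of the fixed-speed decoding above, assuming the FIXED bit is cleared before the switch so the remaining bit selects the speed:
#include <rte_ethdev.h>
/* Illustrative only: map a single fixed link_speeds bit to a numeric
 * speed and duplex; only two cases shown, the driver handles more. */
static void
decode_fixed_speed(uint32_t link_speeds, uint32_t *speed, uint16_t *duplex)
{
	*duplex = RTE_ETH_LINK_FULL_DUPLEX;
	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
	case RTE_ETH_LINK_SPEED_10M_HD:
		*speed = RTE_ETH_SPEED_NUM_10M;
		*duplex = RTE_ETH_LINK_HALF_DUPLEX;
		break;
	case RTE_ETH_LINK_SPEED_1G:
		*speed = RTE_ETH_SPEED_NUM_1G;
		break;
	default:
		*speed = RTE_ETH_SPEED_NUM_NONE;
		break;
	}
}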
@@ -556,30 +556,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
if (fif->mac_type == fman_mac_1g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G;
} else if (fif->mac_type == fman_mac_2_5g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G
- | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G
+ | RTE_ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -612,13 +612,13 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
/* Update Rx offload info */
@@ -645,14 +645,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -686,7 +686,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
return ret;
- if (link->link_status == ETH_LINK_DOWN &&
+ if (link->link_status == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -697,15 +697,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
if (ioctl_version < 2) {
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (fif->mac_type == fman_mac_1g)
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
else if (fif->mac_type == fman_mac_2_5g)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else if (fif->mac_type == fman_mac_10g)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -981,7 +981,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
buffsz * DPAA_SGT_MAX_ENTRIES) {
DPAA_PMD_ERR("max RxPkt size %d too big to fit "
@@ -1303,7 +1303,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
return 0;
@@ -1319,7 +1319,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
return 0;
@@ -1349,10 +1349,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (fc_conf->mode == RTE_FC_NONE) {
+ if (fc_conf->mode == RTE_ETH_FC_NONE) {
return 0;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
- fc_conf->mode == RTE_FC_FULL) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+ fc_conf->mode == RTE_ETH_FC_FULL) {
fman_if_set_fc_threshold(dev->process_private,
fc_conf->high_water,
fc_conf->low_water,
@@ -1396,11 +1396,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
}
ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time =
fman_if_get_fc_quanta(dev->process_private);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -1663,10 +1663,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
fc_conf = dpaa_intf->fc_conf;
ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
#define DPAA_DEBUG_FQ_TX_ERROR 1
#define DPAA_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP)
#define DPAA_TX_CKSUM_OFFLOAD_MASK ( \
PKT_TX_IP_CKSUM | \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
if (req_dist_set % 2 != 0) {
dist_field = 1U << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_ETH;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
if (ipv4_configured)
break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV4;
break;
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (ipv6_configured)
break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV6;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
if (tcp_configured)
break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_TCP;
break;
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (udp_configured)
break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_UDP;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f12e..7c92b2a42e3f 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -216,7 +216,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
if (req_dist_set % 2 != 0) {
dist_field = 1ULL << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -233,7 +233,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_MPLS:
+ case RTE_ETH_RSS_MPLS:
if (mpls_configured)
break;
@@ -270,13 +270,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (l3_configured)
break;
@@ -314,12 +314,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_TCP_EX:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (l4_configured)
break;
@@ -346,8 +346,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e22..23bb985b95e9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,34 +38,34 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* enable timestamp in mbuf */
bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -143,7 +143,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN Filter not available */
if (!priv->max_vlan_filters) {
DPAA2_PMD_INFO("VLAN filter not available");
@@ -151,7 +151,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
priv->token, true);
else
@@ -252,13 +252,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_rx_offloads_nodis;
dev_info->tx_offload_capa = dev_tx_offloads_sup |
dev_tx_offloads_nodis;
- dev_info->speed_capa = ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -271,10 +271,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
if (dpaa2_svr_family == SVR_LX2160A) {
- dev_info->speed_capa |= ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
return 0;
@@ -292,16 +292,16 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+ {RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+ {RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
};
/* Update Rx offload info */
@@ -328,15 +328,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -559,7 +559,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
priv->token, eth_conf->rxmode.max_rx_pkt_len
@@ -578,7 +578,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
ret = dpaa2_setup_flow_dist(dev,
eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -592,12 +592,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rx_l3_csum_offload = true;
- if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
rx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -615,7 +615,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
#if !defined(RTE_LIBRTE_IEEE1588)
- if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
#endif
{
ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -628,12 +628,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dpaa2_enable_ts[dev->data->port_id] = true;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
tx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -665,8 +665,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
dpaa2_tm_init(dev);
@@ -1477,10 +1477,10 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > DPAA2_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -1881,7 +1881,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
return -1;
}
- if (state.up == ETH_LINK_DOWN &&
+ if (state.up == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -1893,9 +1893,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
link.link_speed = state.rate;
if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
@@ -2056,9 +2056,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* No TX side flow control (send Pause frame disabled)
*/
if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
} else {
/* DPNI_LINK_OPT_PAUSE not set
* if ASYM_PAUSE set,
@@ -2068,9 +2068,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* Flow control disabled
*/
if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return ret;
@@ -2114,14 +2114,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* update cfg with fc_conf */
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
/* Full flow control;
* OPT_PAUSE set, ASYM_PAUSE not set
*/
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
/* Enable RX flow control
* OPT_PAUSE not set;
* ASYM_PAUSE set;
@@ -2129,7 +2129,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_PAUSE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
/* Enable TX Flow control
* OPT_PAUSE set
* ASYM_PAUSE set
@@ -2137,7 +2137,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
/* Disable Flow control
* OPT_PAUSE not set
* ASYM_PAUSE not set
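The four cases form a two-bit truth table over the DPNI pause options, with RTE_ETH_FC_NONE clearing both bits; a minimal sketch (flag values hypothetical, not part of this patch):
#include <stdint.h>
#include <rte_ethdev.h>
#define OPT_PAUSE      0x1ULL /* hypothetical stand-in for DPNI_LINK_OPT_PAUSE */
#define OPT_ASYM_PAUSE 0x2ULL /* hypothetical stand-in for DPNI_LINK_OPT_ASYM_PAUSE */
/* Illustrative only: PAUSE alone = full flow control, ASYM alone =
 * Tx pause, both = Rx pause, neither = flow control disabled. */
static uint64_t
fc_mode_to_dpni_options(enum rte_eth_fc_mode mode)
{
	switch (mode) {
	case RTE_ETH_FC_FULL:
		return OPT_PAUSE;
	case RTE_ETH_FC_TX_PAUSE:
		return OPT_ASYM_PAUSE;
	case RTE_ETH_FC_RX_PAUSE:
		return OPT_PAUSE | OPT_ASYM_PAUSE;
	default:
		return 0;
	}
}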
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cdc0..ca75a2175524 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,12 +65,12 @@
#define DPAA2_TX_CONF_ENABLE 0x08
#define DPAA2_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP | \
- ETH_RSS_MPLS)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_MPLS)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
rte_vlan_strip(bufs[num_rx]);
dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
eth_data->port_id);
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP) {
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
rte_vlan_strip(bufs[num_rx]);
}
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags
& PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
int ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..ca488fea966f 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
#define E1000_FTQF_QUEUE_ENABLE 0x00000100
#define IGB_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
/*
* The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6fb205f8577f 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -599,8 +599,8 @@ eth_em_start(struct rte_eth_dev *dev)
e1000_clear_hw_cntrs_base_generic(hw);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_em_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -613,39 +613,39 @@ eth_em_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -1104,9 +1104,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
/* Preferred queue parameters */
dev_info->default_rxportconf.nb_queues = 1;
@@ -1164,17 +1164,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -1426,15 +1426,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
em_vlan_hw_strip_enable(dev);
else
em_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
em_vlan_hw_filter_enable(dev);
else
em_vlan_hw_filter_disable(dev);
@@ -1603,7 +1603,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
if (link.link_status) {
PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1685,13 +1685,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1820,11 +1820,11 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (frame_size > E1000_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~E1000_RCTL_LPE;
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..cf672c32277b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
struct em_rx_entry *sw_ring; /**< address of RX software ring. */
struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
- uint64_t offloads; /**< Offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
uint16_t nb_rx_desc; /**< number of RX descriptors. */
uint16_t rx_tail; /**< current value of RDT register. */
uint16_t nb_rx_hold; /**< number of held free RX desc. */
@@ -172,7 +172,7 @@ struct em_tx_queue {
uint8_t wthresh; /**< Write-back threshold register. */
struct em_ctx_info ctx_cache;
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
#if 1
@@ -1168,11 +1168,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
RTE_SET_USED(dev);
tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
return tx_offload_capa;
}
@@ -1367,15 +1367,15 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
@@ -1468,7 +1468,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1806,7 +1806,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1839,7 +1839,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1849,7 +1849,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1862,7 +1862,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1874,21 +1874,21 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
}
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1908,7 +1908,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f33415a..03509c960326 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1082,21 +1082,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
- if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
- tx_mq_mode == ETH_MQ_TX_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+ tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* Check multi-queue mode.
- * To no break software we accept ETH_MQ_RX_NONE as this might
+ * To not break software, we accept RTE_ETH_MQ_RX_NONE as this might
* be used to turn off VLAN filter.
*/
- if (rx_mq_mode == ETH_MQ_RX_NONE ||
- rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
} else {
/* Only support one queue on VFs.
@@ -1108,12 +1108,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
/* TX mode is not used here, so mode might be ignored.*/
- if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(WARNING, "SRIOV is active,"
" TX mode %d is not supported. "
" Driver will behave as %d mode.",
- tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+ tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
}
/* check valid queue number */
@@ -1126,17 +1126,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
* warning if invalid mode is used.
*/
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
return -EINVAL;
}
- if (tx_mq_mode != ETH_MQ_TX_NONE &&
- tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+ tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
" Due to txmode is meaningless in this"
" driver, just ignore.",
@@ -1155,8 +1155,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = igb_check_mq_mode(dev);
@@ -1296,8 +1296,8 @@ eth_igb_start(struct rte_eth_dev *dev)
/*
* VLAN Offload Settings
*/
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_igb_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1305,7 +1305,7 @@ eth_igb_start(struct rte_eth_dev *dev)
return ret;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable VLAN filter since VMDq always use VLAN filter */
igb_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1319,39 +1319,39 @@ eth_igb_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -2194,21 +2194,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
case e1000_82576:
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
@@ -2234,7 +2234,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return -EINVAL;
}
dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2260,9 +2260,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2305,12 +2305,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
dev_info->max_rx_pktlen = 0x3FFF; /* See RLPML register. */
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
switch (hw->mac.type) {
case e1000_vfadapt:
dev_info->max_rx_queues = 2;
@@ -2411,17 +2411,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else if (!link_check) {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2597,7 +2597,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= E1000_CTRL_EXT_EXT_VLAN;
/* only outer TPID of double VLAN can be configured*/
- if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg = E1000_READ_REG(hw, E1000_VET);
reg = (reg & (~E1000_VET_VET_EXT)) |
((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2686,7 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
E1000_WRITE_REG(hw, E1000_RLPML,
dev->data->dev_conf.rxmode.max_rx_pkt_len);
}
@@ -2704,7 +2704,7 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
E1000_WRITE_REG(hw, E1000_RLPML,
dev->data->dev_conf.rxmode.max_rx_pkt_len +
VLAN_TAG_SIZE);
@@ -2716,22 +2716,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igb_vlan_hw_strip_enable(dev);
else
igb_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igb_vlan_hw_filter_enable(dev);
else
igb_vlan_hw_filter_disable(dev);
}
- if(mask & ETH_VLAN_EXTEND_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
igb_vlan_hw_extend_enable(dev);
else
igb_vlan_hw_extend_disable(dev);
@@ -2883,7 +2883,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3037,13 +3037,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3112,18 +3112,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* on configuration
*/
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
ctrl |= E1000_CTRL_RFCE;
ctrl &= ~E1000_CTRL_TFCE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
ctrl |= E1000_CTRL_TFCE;
ctrl &= ~E1000_CTRL_RFCE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
break;
default:
@@ -3271,22 +3271,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -3584,16 +3584,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
@@ -3625,16 +3625,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGB_4_BIT_MASK);
if (!mask)
@@ -4407,11 +4407,11 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (frame_size > E1000_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~E1000_RCTL_LPE;
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
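
[Aside: the rx_pause/tx_pause to RTE_ETH_FC_* mapping now appears verbatim in both the em and igb flow-control getters; a shared lookup table is one way to express it once. A sketch, not part of this patch:]

static const enum rte_eth_fc_mode fc_mode_tbl[2][2] = {
	/* indexed as [rx_pause][tx_pause] */
	[0][0] = RTE_ETH_FC_NONE,
	[0][1] = RTE_ETH_FC_TX_PAUSE,
	[1][0] = RTE_ETH_FC_RX_PAUSE,
	[1][1] = RTE_ETH_FC_FULL,
};

fc_conf->mode = fc_mode_tbl[!!rx_pause][!!tx_pause];
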
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
if (*vfinfo == NULL)
rte_panic("Cannot allocate memory for private VF data\n");
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..a57dde59dbc0 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/**
@@ -185,7 +185,7 @@ struct igb_tx_queue {
/**< Start context position for transmit queue. */
struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
#if 1
@@ -1456,13 +1456,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
RTE_SET_USED(dev);
- tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return tx_offload_capa;
}
@@ -1635,20 +1635,20 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == e1000_i350 ||
hw->mac.type == e1000_i210 ||
hw->mac.type == e1000_i211)
- rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
return rx_offload_capa;
}
@@ -1729,7 +1729,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1963,23 +1963,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
}
@@ -2045,23 +2045,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -2183,15 +2183,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
E1000_VMOLR_MPME);
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
vmolr |= E1000_VMOLR_AUPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
vmolr |= E1000_VMOLR_ROMPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
vmolr |= E1000_VMOLR_ROPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
vmolr |= E1000_VMOLR_BAM;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
vmolr |= E1000_VMOLR_MPME;
E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2227,9 +2227,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VLVF: set up filters for vlan tags as configured */
for (i = 0; i < cfg->nb_pool_maps; i++) {
/* set vlan id in VF register and set the valid bit */
- E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
- (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
- ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+ E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+ (cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+ ((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
E1000_VLVF_POOLSEL_MASK)));
}
@@ -2281,7 +2281,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t mrqc;
- if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+ if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
/*
* SRIOV active scheme
* FIXME if support RSS together with VMDq & SRIOV
@@ -2295,14 +2295,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
/*Configure general VMDQ only RX parameters*/
igb_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode.*/
default:
igb_rss_disable(dev);
@@ -2342,7 +2342,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
rctl |= E1000_RCTL_LPE;
@@ -2351,7 +2351,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Set maximum packet length by default, and might be updated
* together with enabling/disabling dual VLAN.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
max_len += VLAN_TAG_SIZE;
E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2387,7 +2387,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2458,7 +2458,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2502,16 +2502,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
rxcsum |= E1000_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
if (rxmode->offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
rxcsum |= E1000_RXCSUM_TUOFL;
else
rxcsum &= ~E1000_RXCSUM_TUOFL;
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_CRCOFL;
else
rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2519,7 +2519,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
/* clear STRCRC bit in all queues */
@@ -2559,7 +2559,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Make sure VLAN Filters are off. */
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
rctl &= ~E1000_RCTL_VFE;
/* Don't store bad packets. */
rctl &= ~E1000_RCTL_SBP;
@@ -2758,7 +2758,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
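
[For context, the RTE_ETH_RETA_GROUP_SIZE arithmetic above mirrors what an application does when programming the table. A minimal caller-side sketch; port_id, nb_rxq and ret are placeholders:]

struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_128 /
				     RTE_ETH_RETA_GROUP_SIZE];
uint16_t i;

memset(reta, 0, sizeof(reta));
for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
	uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
	uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

	reta[idx].mask |= 1ULL << shift;	/* mark entry as valid */
	reta[idx].reta[shift] = i % nb_rxq;	/* spread over Rx queues */
}
ret = rte_eth_dev_rss_reta_update(port_id, reta, RTE_ETH_RSS_RETA_SIZE_128);
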
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..4e3ee72608f4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -116,10 +116,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
#define ENA_STATS_ARRAY_TX ARRAY_SIZE(ena_stats_tx_strings)
#define ENA_STATS_ARRAY_RX ARRAY_SIZE(ena_stats_rx_strings)
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
- DEV_TX_OFFLOAD_UDP_CKSUM |\
- DEV_TX_OFFLOAD_IPV4_CKSUM |\
- DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
PKT_TX_IP_CKSUM |\
PKT_TX_TCP_SEG)
@@ -310,7 +310,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
(queue_offloads & QUEUE_OFFLOADS)) {
/* check if TSO is required */
if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
ena_tx_ctx->tso_enable = true;
ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -318,7 +318,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L3 checksum is needed */
if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
ena_tx_ctx->l3_csum_enable = true;
if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -335,12 +335,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L4 checksum is needed */
if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
ena_tx_ctx->l4_csum_enable = true;
} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
PKT_TX_UDP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
ena_tx_ctx->l4_csum_enable = true;
} else {
@@ -623,9 +623,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
struct rte_eth_link *link = &dev->data->dev_link;
struct ena_adapter *adapter = dev->data->dev_private;
- link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -684,7 +684,7 @@ static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
uint32_t max_frame_len = adapter->max_mtu;
if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
max_frame_len =
adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
@@ -915,7 +915,7 @@ static int ena_start(struct rte_eth_dev *dev)
if (rc)
goto err_start_tx;
- if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
rc = ena_rss_configure(adapter);
if (rc)
goto err_rss_init;
@@ -1854,9 +1854,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
adapter->state = ENA_ADAPTER_STATE_CONFIG;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
- dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
adapter->tx_selected_offloads = dev->data->dev_conf.txmode.offloads;
adapter->rx_selected_offloads = dev->data->dev_conf.rxmode.offloads;
@@ -1907,36 +1907,36 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
dev_info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/* Set Tx & Rx features available for device */
if (adapter->offloads.tso4_supported)
- tx_feat |= DEV_TX_OFFLOAD_TCP_TSO;
+ tx_feat |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (adapter->offloads.tx_csum_supported)
- tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ tx_feat |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (adapter->offloads.rx_csum_supported)
- rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ rx_feat |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ rx_feat |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ tx_feat |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
dev_info->rx_offload_capa = rx_feat;
if (adapter->offloads.rss_hash_supported)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->rx_queue_offload_capa = rx_feat;
dev_info->tx_offload_capa = tx_feat;
dev_info->tx_queue_offload_capa = tx_feat;
@@ -2100,7 +2100,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
#endif
- fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+ fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
descs_in_use = rx_ring->ring_size -
ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5cb..3b1844e50982 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -54,8 +54,8 @@
#define ENA_HASH_KEY_SIZE 40
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
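
[ENA_ALL_RSS_HF above is built from the renamed RTE_ETH_RSS_* bits; an application requesting the same hash set would look roughly like this (sketch, port_id assumed):]

struct rte_eth_rss_conf rss_conf = {
	.rss_key = NULL,	/* keep the driver's default key */
	.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
		  RTE_ETH_RSS_NONFRAG_IPV4_UDP |
		  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
		  RTE_ETH_RSS_NONFRAG_IPV6_UDP,
};

ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
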
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 88afe13da04d..e7b57659491d 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
/* Each reta_conf is for 64 entries.
* To support 128 we use 2 conf of 64.
*/
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
entry_value =
ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -137,10 +137,10 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
int reta_idx;
if (reta_size == 0 || reta_conf == NULL ||
- (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
+ (reta_size > RTE_ETH_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -155,8 +155,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0 ; i < reta_size ; i++) {
- reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
reta_conf[reta_conf_idx].reta[reta_idx] =
ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -200,34 +200,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Convert proto to ETH flag */
switch (proto) {
case ENA_ADMIN_RSS_TCP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
break;
case ENA_ADMIN_RSS_UDP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
break;
case ENA_ADMIN_RSS_TCP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
break;
case ENA_ADMIN_RSS_UDP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
break;
case ENA_ADMIN_RSS_IP4:
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
break;
case ENA_ADMIN_RSS_IP6:
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
break;
case ENA_ADMIN_RSS_IP4_FRAG:
- rss_hf |= ETH_RSS_FRAG_IPV4;
+ rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
break;
case ENA_ADMIN_RSS_NOT_IP:
- rss_hf |= ETH_RSS_L2_PAYLOAD;
+ rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
break;
case ENA_ADMIN_RSS_TCP6_EX:
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
break;
case ENA_ADMIN_RSS_IP6_EX:
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
break;
default:
break;
@@ -236,10 +236,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L3. */
switch (fields & ENA_HF_RSS_ALL_L3) {
case ENA_ADMIN_RSS_L3_SA:
- rss_hf |= ETH_RSS_L3_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L3_DA:
- rss_hf |= ETH_RSS_L3_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
break;
default:
break;
@@ -248,10 +248,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L4. */
switch (fields & ENA_HF_RSS_ALL_L4) {
case ENA_ADMIN_RSS_L4_SP:
- rss_hf |= ETH_RSS_L4_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L4_DP:
- rss_hf |= ETH_RSS_L4_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
break;
default:
break;
@@ -269,11 +269,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
/* Determine which fields of L3 should be used. */
- switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
- case ETH_RSS_L3_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+ case RTE_ETH_RSS_L3_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_DA;
break;
- case ETH_RSS_L3_SRC_ONLY:
+ case RTE_ETH_RSS_L3_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_SA;
break;
default:
@@ -285,11 +285,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
}
/* Determine which fields of L4 should be used. */
- switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
- case ETH_RSS_L4_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ case RTE_ETH_RSS_L4_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_DP;
break;
- case ETH_RSS_L4_SRC_ONLY:
+ case RTE_ETH_RSS_L4_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_SP;
break;
default:
@@ -335,43 +335,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
int rc, i;
/* Turn on appropriate fields for each requested packet type */
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
- if ((rss_hf & ETH_RSS_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
selected_fields[ENA_ADMIN_RSS_IP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
- if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
- if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+ if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
@@ -542,7 +542,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
uint16_t admin_hf;
static bool warn_once;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
return -ENOTSUP;
}
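
[The RTE_ETH_MQ_RX_RSS_FLAG / RTE_ETH_RX_OFFLOAD_RSS_HASH checks above correspond to a port configuration of roughly this shape on the application side (sketch; the rss_hf choice and nb_rxq/nb_txq are arbitrary placeholders):]

struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = RTE_ETH_MQ_RX_RSS,
		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			    RTE_ETH_RX_OFFLOAD_RSS_HASH,
	},
	.rx_adv_conf.rss_conf = {
		.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
	},
};

ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
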
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..e0fb44edeb41 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
if (status & ENETC_LINK_MODE)
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
else
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
if (status & ENETC_LINK_STATUS)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
switch (status & ENETC_LINK_SPEED_MASK) {
case ENETC_LINK_SPEED_1G:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ENETC_LINK_SPEED_100M:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
default:
case ENETC_LINK_SPEED_10M:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -207,11 +207,11 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa =
- (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
return 0;
}
@@ -462,7 +462,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
RTE_ETH_QUEUE_STATE_STOPPED;
}
- rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0);
return 0;
@@ -679,10 +679,10 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > ENETC_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
@@ -708,7 +708,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
uint32_t max_len;
max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -723,7 +723,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
RTE_ETHER_CRC_LEN;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
int config;
config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -731,10 +731,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
checksum &= ~L3_CKSUM;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
checksum &= ~L4_CKSUM;
enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
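
[The jumbo-frame handling touched above follows the usual pattern of flipping RTE_ETH_RX_OFFLOAD_JUMBO_FRAME with the frame size; in generic form it reads as below (a sketch only — the enetc branch keys off its own ENETC_ETH_MAX_LEN threshold):]

if (frame_size > RTE_ETHER_MAX_LEN)
	dev->data->dev_conf.rxmode.offloads |=
		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
	dev->data->dev_conf.rxmode.offloads &=
		~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
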
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
*/
uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
uint8_t rss_enable;
- uint64_t rss_hf; /* ETH_RSS flags */
+ uint64_t rss_hf; /* RTE_ETH_RSS flags */
union vnic_rss_key rss_key;
union vnic_rss_cpu rss_cpu;
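
[The speed table that follows in enic_ethdev.c feeds dev_info->speed_capa; a consumer checks the renamed bits like so (sketch, port_id assumed):]

struct rte_eth_dev_info dev_info;

ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret == 0 && (dev_info.speed_capa & RTE_ETH_LINK_SPEED_40G))
	printf("port %u advertises 40G support\n", port_id);
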
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..30cd1d4f5dd1 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
uint16_t sub_devid;
uint32_t capa;
} vic_speed_capa_map[] = {
- { 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
- { 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
- { 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
- { 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
- { 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
- { 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
- { 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
- { 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
- { 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
- { 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
- { 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
- { 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
- { 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
- { 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
- { 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1440 Mezz */
- { 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1480 MLOM */
- { 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
- { 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
- { 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
- { 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
- { 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
- { 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+ { 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+ { 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+ { 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+ { 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+ { 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+ { 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+ { 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+ { 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+ { 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+ { 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+ { 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+ { 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+ { 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+ { 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+ { 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+ { 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+ { 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+ { 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+ { 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+ { 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+ { 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+ { 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
{ 0, 0 }, /* End marker */
};
@@ -293,8 +293,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
ENICPMD_FUNC_TRACE();
offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
enic->ig_vlan_strip_en = 1;
else
enic->ig_vlan_strip_en = 0;
@@ -319,17 +319,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
return ret;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->mc_count = 0;
enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM);
+ RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* All vlan offload masks to apply the current settings */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = enicpmd_vlan_offload_set(eth_dev, mask);
if (ret) {
dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -431,14 +431,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
}
/* 1300 and later models are at least 40G */
if (id >= 0x0100)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
/* VFs have subsystem id 0, check device id */
if (id == 0) {
/* Newer VF implies at least 40G model */
if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
}
- return ETH_LINK_SPEED_10G;
+ return RTE_ETH_LINK_SPEED_10G;
}
static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -770,8 +770,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -802,8 +802,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
*/
rss_cpu = enic->rss_cpu;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
rss_cpu.cpu[i / 4].b[i % 4] =
enic_rte_rq_idx_to_sop_idx(
@@ -879,7 +879,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
*/
conf->offloads = enic->rx_offload_capa;
if (!enic->ig_vlan_strip_en)
- conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* rx_thresh and other fields are not applicable for enic */
}
@@ -965,8 +965,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
static int udp_tunnel_common_check(struct enic *enic,
struct rte_eth_udp_tunnel *tnl)
{
- if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
- tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+ if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+ tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1006,7 +1006,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
@@ -1035,7 +1035,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
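
[For reference, registering a VXLAN port through the renamed tunnel type looks like this on the application side (sketch; 4789 is the IANA-assigned VXLAN port, port_id and ret are placeholders):]

struct rte_eth_udp_tunnel tnl = {
	.udp_port = 4789,
	.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
};

ret = rte_eth_dev_udp_tunnel_port_add(port_id, &tnl);
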
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..754cf362c6d8 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
memset(&link, 0, sizeof(link));
link.link_status = enic_get_link_status(enic);
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = vnic_dev_port_speed(enic->vdev);
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
}
eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* vnic notification of link status has already been turned on in
* enic_dev_init() which is called during probe time. Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
* and vlan insertion are supported.
*/
simple_tx_offloads = enic->tx_offload_capa &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if ((eth_dev->data->dev_conf.txmode.offloads &
~simple_tx_offloads) == 0) {
ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
@@ -1386,15 +1386,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type = 0;
rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
if (enic->rq_count > 1 &&
- (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+ (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
rss_hf != 0) {
rss_enable = 1;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
if (enic->udp_rss_weak) {
/*
@@ -1405,12 +1405,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
}
}
- if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
if (enic->udp_rss_weak)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1751,9 +1751,9 @@ enic_enable_overlay_offload(struct enic *enic)
return -EINVAL;
}
enic->tx_offload_capa |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- (enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
- (enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ (enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+ (enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
enic->tx_offload_mask |=
PKT_TX_OUTER_IPV6 |
PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index a8f5332a407f..12f734260ca5 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
* IPV4 hash type handles both non-frag and frag packet types.
* TCP/UDP is controlled via a separate flag below.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (ENIC_SETTING(enic, RSSHASH_IPV6))
/*
* The VIC adapter can perform RSS on IPv6 packets with and
* without extension headers. An IPv6 "fragment" is an IPv6
* packet with the fragment extension header.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (enic->udp_rss_weak)
enic->flow_type_rss_offloads |=
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
/* Zero offloads if RSS is not enabled */
if (!ENIC_SETTING(enic, RSS))
@@ -201,20 +201,20 @@ int enic_get_vnic_config(struct enic *enic)
enic->tx_queue_offload_capa = 0;
enic->tx_offload_capa =
enic->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->tx_offload_mask =
PKT_TX_IPV6 |
PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 8216063a3d8b..9b22a6ce8941 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
static const struct rte_eth_link eth_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
};
static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
int qid;
struct rte_eth_dev *fsdev;
struct rxq **rxq;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH(sdev)->data->dev_conf.intr_conf;
fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
failsafe_rx_intr_install(struct rte_eth_dev *dev)
{
struct fs_priv *priv = PRIV(dev);
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&priv->data->dev_conf.intr_conf;
if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..8cb215651df8 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1182,53 +1182,53 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
* configuring a sub-device.
*/
infos->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->rx_queue_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
infos->flow_type_rss_offloads =
- ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP;
+ RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP;
infos->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc4b..7af115399e0f 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
uint8_t drop_en;
uint8_t rx_deferred_start; /* don't start this queue in dev start. */
uint16_t rx_ftag_en; /* indicates FTAG RX supported */
- uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
};
/*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
uint16_t next_rs; /* Next pos to set RS flag */
uint16_t next_dd; /* Next pos to check DD flag */
volatile uint32_t *tail_ptr;
- uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
uint16_t nb_desc;
uint16_t port_id;
uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..e77cfa3f9882 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
- if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
return 0;
if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
};
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
*/
hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
if (mrqc == 0) {
PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
if (hw->mac.type != fm10k_mac_pf)
return;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
nb_queue_pools = vmdq_conf->nb_queue_pools;
/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
- rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t reg;
dev->data->scattered_rx = 1;
reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
}
/* Update default vlan when not in VMDQ mode */
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_50G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
dev->data->dev_link.link_status =
- dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+ dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
return 0;
}
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->max_vfs = pdev->max_vfs;
dev_info->vmdq_pool_base = 0;
dev_info->vmdq_queue_base = 0;
- dev_info->max_vmdq_pools = ETH_32_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_32_POOLS;
dev_info->vmdq_queue_num = FM10K_MAX_QUEUES_PF;
dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
dev_info->reta_size = FM10K_MAX_RSS_INDICES;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
return -EINVAL;
}
- if (vlan_id > ETH_VLAN_ID_MAX) {
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
return -EINVAL;
}
@@ -1767,21 +1767,21 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
}
static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_RSS_HASH);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
}
static int
@@ -1966,12 +1966,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO);
+ return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO);
}
static int
@@ -2112,8 +2112,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2161,8 +2161,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
* 128-entries in 32 registers
*/
for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
BIT_MASK_PER_UINT32);
if (mask == 0)
@@ -2199,15 +2199,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
return -EINVAL;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
/* If the mapping doesn't fit any supported, return */
if (mrqc == 0)
@@ -2244,15 +2244,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
hf = 0;
- hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV4) ? RTE_ETH_RSS_IPV4 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX : 0;
rss_conf->rss_hf = hf;
@@ -2607,7 +2607,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
/* first clear the internal SW recording structure */
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
false);
@@ -2623,7 +2623,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
MAIN_VSI_POOL_NUMBER);
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
true);
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
#ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
/* without rx ol_flags, no VP flag report */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
#endif
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
static int hinic_link_event_process(struct hinic_hwdev *hwdev,
struct rte_eth_dev *eth_dev, u8 status)
{
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
struct nic_port_info port_info;
struct rte_eth_link link;
int rc = HINIC_OK;
if (!status) {
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
memset(&port_info, 0, sizeof(port_info));
rc = hinic_get_port_info(hwdev, &port_info);
if (rc) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
link.link_speed = port_speed[port_info.speed %
LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a7240154668..105c0f48a616 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
/* init vlan offload */
err = hinic_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
} else {
*speed_capa = 0;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
- *speed_capa |= ETH_LINK_SPEED_1G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
- *speed_capa |= ETH_LINK_SPEED_10G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
- *speed_capa |= ETH_LINK_SPEED_25G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
- *speed_capa |= ETH_LINK_SPEED_40G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
- *speed_capa |= ETH_LINK_SPEED_100G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_100G;
}
}
@@ -732,25 +732,25 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
hinic_get_speed_capa(dev, &info->speed_capa);
info->rx_queue_offload_capa = 0;
- info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_RSS_HASH;
+ info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
info->tx_queue_offload_capa = 0;
- info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
info->hash_key_size = HINIC_RSS_KEY_SIZE;
info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -847,20 +847,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
u8 port_link_status = 0;
struct nic_port_info port_link_info;
struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
rc = hinic_get_link_status(nic_hwdev, &port_link_status);
if (rc)
return rc;
if (!port_link_status) {
- link->link_status = ETH_LINK_DOWN;
+ link->link_status = RTE_ETH_LINK_DOWN;
link->link_speed = 0;
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
- link->link_autoneg = ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
return HINIC_OK;
}
@@ -902,8 +902,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* Get link status information from hardware */
rc = hinic_priv_get_dev_link_status(nic_dev, &link);
if (rc != HINIC_OK) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Get link status failed");
goto out;
}
@@ -1552,10 +1552,10 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
frame_size = HINIC_MTU_TO_PKTLEN(mtu);
if (frame_size > HINIC_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
@@ -1664,8 +1664,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
int err;
/* Enable or disable VLAN filter */
- if (mask & ETH_VLAN_FILTER_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
TRUE : FALSE;
err = hinic_config_vlan_filter(nic_dev->hwdev, on);
if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1686,8 +1686,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Enable or disable VLAN stripping */
- if (mask & ETH_VLAN_STRIP_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
TRUE : FALSE;
err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
if (err) {
@@ -1873,13 +1873,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = nic_pause.auto_neg;
if (nic_pause.tx_pause && nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (nic_pause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else if (nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1893,14 +1893,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_pause.auto_neg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
nic_pause.tx_pause = true;
else
nic_pause.tx_pause = false;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
nic_pause.rx_pause = true;
else
nic_pause.rx_pause = false;
@@ -1944,7 +1944,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err = 0;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_OK;
}
@@ -1965,14 +1965,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
}
}
- rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
if (err) {
@@ -2008,7 +2008,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_ERROR;
}
@@ -2029,15 +2029,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
rss_conf->rss_hf |= rss_type.ipv4 ?
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
rss_conf->rss_hf |= rss_type.ipv6 ?
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
- rss_conf->rss_hf |= rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
return HINIC_OK;
}
@@ -2067,7 +2067,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
u16 i = 0;
u16 idx, shift;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
return HINIC_OK;
if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2081,8 +2081,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
/* update rss indir_tbl */
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2147,8 +2147,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
{
u64 rss_hf = rss_conf->rss_hf;
- rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
}
static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
{
int err, i;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
nic_dev->num_rss = 0;
if (nic_dev->num_rq > 1) {
/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
PMD_DRV_LOG(WARNING, "Alloc rss template failed");
return err;
}
- nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
for (i = 0; i < nic_dev->num_rq; i++)
hinic_add_rq_to_rx_queue_list(nic_dev, i);
}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
{
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (hinic_rss_template_free(nic_dev->hwdev,
nic_dev->rss_tmpl_idx))
PMD_DRV_LOG(WARNING, "Free rss template failed");
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
}
}
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
int ret = 0;
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
ret = hinic_config_mq_rx_rss(nic_dev, on);
break;
default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
int lro_wqe_num;
int buf_size;
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (rss_conf.rss_hf == 0) {
rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
}
/* Enable both L3/L4 rx checksum offload */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
goto rx_csum_ofl_err;
/* config lro */
- lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+ lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
true : false;
max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
hinic_rss_deinit(nic_dev);
hinic_destroy_num_qps(nic_dev);
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
#define HINIC_DEFAULT_RX_FREE_THRESH 32
#define HINIC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
enum rq_completion_fmt {
RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index b71e2e9ea451..953c146d0200 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
return ret;
}
- if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
if (dcb_rx_conf->nb_tcs == 0)
hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
uint16_t nb_tx_q = hw->data->nb_tx_queues;
int ret;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
return 0;
ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
{
switch (mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->requested_fc_mode = HNS3_FC_NONE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->requested_fc_mode = HNS3_FC_FULL;
break;
default:
hw->requested_fc_mode = HNS3_FC_NONE;
hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
- "configured to RTE_FC_NONE", mode);
+ "configured to RTE_ETH_FC_NONE", mode);
break;
}
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..64d1da09a707 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
};
static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
- { ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
};
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
struct hns3_cmd_desc desc;
int ret;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
return -EINVAL;
}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rte_spinlock_lock(&hw->lock);
rxmode = &dev->data->dev_conf.rxmode;
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
/* ignore vlan filter configuration during promiscuous mode */
if (!dev->data->promiscuous) {
/* Enable or disable VLAN filter */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
true : false;
ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
true : false;
ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
return ret;
}
- ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+ ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
if (ret) {
hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
if (!hw->data->promiscuous) {
/* restore vlan filter states */
offloads = hw->data->dev_conf.rxmode.offloads;
- enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+ enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
ret = hns3_enable_vlan_filter(hns, enable);
if (ret) {
hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
txmode->hw_vlan_reject_untagged);
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
ret = hns3_vlan_offload_set(dev, mask);
if (ret) {
hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2218,9 +2218,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
int max_tc = 0;
int i;
- if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
- (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+ (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
rx_mq_mode, tx_mq_mode);
return -EOPNOTSUPP;
@@ -2228,7 +2228,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2237,7 +2237,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
- hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+ hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
"nb_tcs(%d) != %d or %d in rx direction.",
dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
return -EINVAL;
@@ -2380,7 +2380,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
uint16_t mtu;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME))
return 0;
/*
@@ -2440,11 +2440,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
* configure link_speeds (default 0), which means auto-negotiation.
* In this case, it should return success.
*/
- if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
hw->mac.support_autoneg == 0)
return 0;
- if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
ret = hns3_check_port_speed(hw, link_speeds);
if (ret)
return ret;
@@ -2504,15 +2504,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
if (ret)
goto cfg_err;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = hns3_setup_dcb(dev);
if (ret)
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rss_conf = conf->rx_adv_conf.rss_conf;
hw->rss_dis_flag = false;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2533,7 +2533,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -2633,10 +2633,10 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (is_jumbo_frame)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
@@ -2649,15 +2649,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_10M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
- speed_capa |= ETH_LINK_SPEED_10M;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
return speed_capa;
}
@@ -2668,19 +2668,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
return speed_capa;
}
@@ -2699,7 +2699,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
hns3_get_firber_port_speed_capa(mac->supported_speed);
if (mac->support_autoneg == 0)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -2725,41 +2725,41 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_outer_udp_cksum_supported(hw))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_indep_txrx_supported(hw))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
if (hns3_dev_ptp_supported(hw))
- info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
@@ -2843,7 +2843,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
ret = hns3_update_link_info(eth_dev);
if (ret)
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
return ret;
}
@@ -2856,29 +2856,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link->link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_NONE;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link->link_duplex = mac->link_duplex;
- new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link->link_autoneg = mac->link_autoneg;
}
@@ -2898,8 +2898,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
if (eth_dev->data->dev_started == 0) {
new_link.link_autoneg = mac->link_autoneg;
new_link.link_duplex = mac->link_duplex;
- new_link.link_speed = ETH_SPEED_NUM_NONE;
- new_link.link_status = ETH_LINK_DOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ new_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
@@ -2911,7 +2911,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
break;
}
- if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+ if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3257,31 +3257,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
{
switch (speed_cmd) {
case HNS3_CFG_SPEED_10M:
- *speed = ETH_SPEED_NUM_10M;
+ *speed = RTE_ETH_SPEED_NUM_10M;
break;
case HNS3_CFG_SPEED_100M:
- *speed = ETH_SPEED_NUM_100M;
+ *speed = RTE_ETH_SPEED_NUM_100M;
break;
case HNS3_CFG_SPEED_1G:
- *speed = ETH_SPEED_NUM_1G;
+ *speed = RTE_ETH_SPEED_NUM_1G;
break;
case HNS3_CFG_SPEED_10G:
- *speed = ETH_SPEED_NUM_10G;
+ *speed = RTE_ETH_SPEED_NUM_10G;
break;
case HNS3_CFG_SPEED_25G:
- *speed = ETH_SPEED_NUM_25G;
+ *speed = RTE_ETH_SPEED_NUM_25G;
break;
case HNS3_CFG_SPEED_40G:
- *speed = ETH_SPEED_NUM_40G;
+ *speed = RTE_ETH_SPEED_NUM_40G;
break;
case HNS3_CFG_SPEED_50G:
- *speed = ETH_SPEED_NUM_50G;
+ *speed = RTE_ETH_SPEED_NUM_50G;
break;
case HNS3_CFG_SPEED_100G:
- *speed = ETH_SPEED_NUM_100G;
+ *speed = RTE_ETH_SPEED_NUM_100G;
break;
case HNS3_CFG_SPEED_200G:
- *speed = ETH_SPEED_NUM_200G;
+ *speed = RTE_ETH_SPEED_NUM_200G;
break;
default:
return -EINVAL;
@@ -3610,39 +3610,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
switch (speed) {
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
break;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
break;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
break;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
break;
@@ -4305,14 +4305,14 @@ hns3_mac_init(struct hns3_hw *hw)
int ret;
pf->support_sfp_query = true;
- mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+ mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
if (ret) {
PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
return ret;
}
- mac->link_status = ETH_LINK_DOWN;
+ mac->link_status = RTE_ETH_LINK_DOWN;
return hns3_config_mtu(hw, pf->mps);
}
@@ -4562,7 +4562,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
* all packets coming in the receiving direction.
*/
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, false);
if (ret) {
hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4603,7 +4603,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
}
/* When promiscuous mode is disabled, restore the VLAN filter status */
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, true);
if (ret) {
hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4723,8 +4723,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
mac_info->supported_speed =
rte_le_to_cpu_32(resp->supported_speed);
mac_info->support_autoneg = resp->autoneg_ability;
- mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
- : ETH_LINK_AUTONEG;
+ mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+ : RTE_ETH_LINK_AUTONEG;
} else {
mac_info->query_type = HNS3_DEFAULT_QUERY;
}
@@ -4735,8 +4735,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
static uint8_t
hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
{
- if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
- duplex = ETH_LINK_FULL_DUPLEX;
+ if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
return duplex;
}
@@ -4786,7 +4786,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
return ret;
/* Do nothing if no SFP */
- if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+ if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
return 0;
/*
@@ -4813,7 +4813,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
/* Config full duplex for SFP */
return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
- ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_FULL_DUPLEX);
}
static void
@@ -4932,10 +4932,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
/*
- * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+ * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
* when receiving frames. Otherwise, CRC will be stripped.
*/
- if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
else
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
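For reference, a minimal application-side sketch of how the renamed
KEEP_CRC flag is typically consumed; the helper name and the
single-queue setup are illustrative assumptions, not part of this patch:

#include <rte_ethdev.h>

/* Illustrative helper: request CRC-keeping on RX if the device
 * advertises it, then configure the port with one RX/TX queue.
 */
static int
app_request_keep_crc(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
		conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
	/* With KEEP_CRC set, received mbufs include the 4-byte FCS
	 * (RTE_ETHER_CRC_LEN).
	 */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}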
@@ -4963,7 +4963,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
hns3_err(hw, "get link status cmd failed %d", ret);
- return ETH_LINK_DOWN;
+ return RTE_ETH_LINK_DOWN;
}
req = (struct hns3_link_status_cmd *)desc.data;
@@ -5145,19 +5145,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
return HNS3_FIBER_LINK_SPEED_1G_BIT;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
return HNS3_FIBER_LINK_SPEED_10G_BIT;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
return HNS3_FIBER_LINK_SPEED_25G_BIT;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
return HNS3_FIBER_LINK_SPEED_40G_BIT;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
return HNS3_FIBER_LINK_SPEED_50G_BIT;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
return HNS3_FIBER_LINK_SPEED_100G_BIT;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
return HNS3_FIBER_LINK_SPEED_200G_BIT;
default:
hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5395,20 +5395,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M:
speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
break;
- case ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
break;
- case ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M:
speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
break;
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
break;
default:
@@ -5424,26 +5424,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_1G:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
break;
default:
@@ -5478,28 +5478,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
static inline uint32_t
hns3_get_link_speed(uint32_t link_speeds)
{
- uint32_t speed = ETH_SPEED_NUM_NONE;
-
- if (link_speeds & ETH_LINK_SPEED_10M ||
- link_speeds & ETH_LINK_SPEED_10M_HD)
- speed = ETH_SPEED_NUM_10M;
- if (link_speeds & ETH_LINK_SPEED_100M ||
- link_speeds & ETH_LINK_SPEED_100M_HD)
- speed = ETH_SPEED_NUM_100M;
- if (link_speeds & ETH_LINK_SPEED_1G)
- speed = ETH_SPEED_NUM_1G;
- if (link_speeds & ETH_LINK_SPEED_10G)
- speed = ETH_SPEED_NUM_10G;
- if (link_speeds & ETH_LINK_SPEED_25G)
- speed = ETH_SPEED_NUM_25G;
- if (link_speeds & ETH_LINK_SPEED_40G)
- speed = ETH_SPEED_NUM_40G;
- if (link_speeds & ETH_LINK_SPEED_50G)
- speed = ETH_SPEED_NUM_50G;
- if (link_speeds & ETH_LINK_SPEED_100G)
- speed = ETH_SPEED_NUM_100G;
- if (link_speeds & ETH_LINK_SPEED_200G)
- speed = ETH_SPEED_NUM_200G;
+ uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+ if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+ link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+ speed = RTE_ETH_SPEED_NUM_10M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+ link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+ speed = RTE_ETH_SPEED_NUM_100M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+ speed = RTE_ETH_SPEED_NUM_1G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+ speed = RTE_ETH_SPEED_NUM_10G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+ speed = RTE_ETH_SPEED_NUM_25G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+ speed = RTE_ETH_SPEED_NUM_40G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+ speed = RTE_ETH_SPEED_NUM_50G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+ speed = RTE_ETH_SPEED_NUM_100G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+ speed = RTE_ETH_SPEED_NUM_200G;
return speed;
}
@@ -5507,11 +5507,11 @@ hns3_get_link_speed(uint32_t link_speeds)
static uint8_t
hns3_get_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
static int
@@ -5645,9 +5645,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
struct hns3_set_link_speed_cfg cfg;
memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
- cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
- if (cfg.autoneg != ETH_LINK_AUTONEG) {
+ cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+ if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
cfg.speed = hns3_get_link_speed(conf->link_speeds);
cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
}
@@ -5920,7 +5920,7 @@ hns3_do_stop(struct hns3_adapter *hns)
ret = hns3_cfg_mac_mode(hw, false);
if (ret)
return ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
hns3_configure_all_mac_addr(hns, true);
@@ -6131,17 +6131,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
current_mode = hns3_get_current_fc_mode(dev);
switch (current_mode) {
case HNS3_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case HNS3_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HNS3_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case HNS3_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
}
@@ -6287,7 +6287,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
int i;
rte_spinlock_lock(&hw->lock);
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = pf->local_max_tc;
else
dcb_info->nb_tcs = 1;
@@ -6587,7 +6587,7 @@ hns3_stop_service(struct hns3_adapter *hns)
struct rte_eth_dev *eth_dev;
eth_dev = &rte_eth_devices[hw->data->port_id];
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (hw->adapter_state == HNS3_NIC_STARTED) {
rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
hns3_update_linkstatus_and_event(hw, false);
@@ -6877,7 +6877,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
* in device of link speed
* below 10 Gbps.
*/
- if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+ if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
*state = 0;
return 0;
}
@@ -6909,7 +6909,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
* configured FEC mode is returned.
* If link is up, current FEC mode is returned.
*/
- if (hw->mac.link_status == ETH_LINK_DOWN) {
+ if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
ret = get_current_fec_auto_state(hw, &auto_state);
if (ret)
return ret;
@@ -7008,12 +7008,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
uint32_t cur_capa;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
cur_capa = fec_capa[1].capa;
break;
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
cur_capa = fec_capa[0].capa;
break;
default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e4e4269a12f..c40d28af1d46 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -191,10 +191,10 @@ struct hns3_mac {
bool default_addr_setted; /* whether default addr(mac_addr) is set */
uint8_t media_type;
uint8_t phy_addr;
- uint8_t link_duplex : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
- uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
- uint8_t link_status : 1; /* ETH_LINK_[DOWN/UP] */
- uint32_t link_speed; /* ETH_SPEED_NUM_ */
+ uint8_t link_duplex : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint8_t link_status : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /* RTE_ETH_SPEED_NUM_ */
/*
* Some firmware versions support only the SFP speed query. In addition
* to the SFP speed query, some firmware supports the query of the speed
@@ -1114,9 +1114,9 @@ static inline uint64_t
hns3_txvlan_cap_get(struct hns3_hw *hw)
{
if (hw->port_base_vlan_cfg.state)
- return DEV_TX_OFFLOAD_VLAN_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
else
- return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
}
#endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..53d79bb2106c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -809,15 +809,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
hns3_err(hw, "setting link speed/duplex not supported");
ret = -EINVAL;
goto cfg_err;
}
/* When RSS is not configured, redirect packets to queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
hw->rss_dis_flag = false;
rss_conf = conf->rx_adv_conf.rss_conf;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -829,7 +829,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
* If jumbo frames are enabled, MTU needs to be refreshed
* according to the maximum RX packet length.
*/
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
@@ -853,7 +853,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -931,10 +931,10 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
@@ -963,33 +963,33 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_outer_udp_cksum_supported(hw))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_indep_txrx_supported(hw))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1669,10 +1669,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN filter */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = hns3vf_en_vlan_filter(hw, true);
else
ret = hns3vf_en_vlan_filter(hw, false);
@@ -1682,10 +1682,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Vlan stripping setting */
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ret = hns3vf_en_hw_strip_rxvtag(hw, true);
else
ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1753,7 +1753,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
int ret;
dev_conf = &hw->data->dev_conf;
- en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+ en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
: false;
ret = hns3vf_en_hw_strip_rxvtag(hw, en);
if (ret)
@@ -1778,8 +1778,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
}
/* Apply vlan offload setting */
- ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
@@ -2088,7 +2088,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
struct hns3_hw *hw = &hns->hw;
int ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
/*
* The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2247,31 +2247,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&new_link, 0, sizeof(new_link));
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link.link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link.link_duplex = mac->link_duplex;
- new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(eth_dev, &new_link);
}
@@ -2599,11 +2599,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
* Make sure to update the link status before hns3vf_stop_poll_job,
* because updating the link status depends on the polling job running.
*/
- hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+ hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
hw->mac.link_duplex);
hns3vf_stop_poll_job(eth_dev);
}
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
hns3_set_rxtx_function(eth_dev);
rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index fc77979c5f14..0ac8705b590b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
* Kunpeng930 and future Kunpeng series support using the src/dst port
* fields in the RSS hash for the IPv6 SCTP packet type.
*/
- if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
- (rss->types & ETH_RSS_IP ||
+ if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+ (rss->types & RTE_ETH_RSS_IP ||
(!hw->rss_info.ipv6_sctp_offload_supported &&
- rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+ rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return false;
return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index df8485904688..395590c86c03 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
struct hns3_hw *hw = &hns->hw;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
return 0;
ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_tuple_table[] = {
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
};
@@ -146,44 +146,44 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_rss_types[] = {
- { ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+ { RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+ { RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
};
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
* When the user does not specify any of the following types, or a
* combination of them, all fields are enabled for the supported RSS
* types. The types are:
- * - ETH_RSS_L3_SRC_ONLY
- * - ETH_RSS_L3_DST_ONLY
- * - ETH_RSS_L4_SRC_ONLY
- * - ETH_RSS_L4_DST_ONLY
+ * - RTE_ETH_RSS_L3_SRC_ONLY
+ * - RTE_ETH_RSS_L3_DST_ONLY
+ * - RTE_ETH_RSS_L4_SRC_ONLY
+ * - RTE_ETH_RSS_L4_DST_ONLY
*/
if (fields_count == 0) {
for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
sizeof(rss_cfg->rss_indirection_tbl));
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
rte_spinlock_unlock(&hw->lock);
hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
}
rte_spinlock_lock(&hw->lock);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] =
rss_cfg->rss_indirection_tbl[i];
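For orientation, a hedged application-side sketch of the same idx/shift
arithmetic when programming the table (port_id, reta_size and nb_queues
are assumed inputs, with reta_size at most RTE_ETH_RSS_RETA_SIZE_512):

#include <rte_ethdev.h>

/* Illustrative only: spread reta_size indirection slots round-robin
 * over nb_queues using the renamed RTE_ETH_RETA_GROUP_SIZE.
 */
static int
app_fill_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64
		reta[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE] = { 0 };
	uint16_t i;

	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[idx].mask |= 1ULL << shift;
		reta[idx].reta[shift] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
}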
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/* When RSS is off, redirect packets to queue 0 */
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
hns3_rss_uninit(hns);
/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
* When RSS is off, there is no need to configure the RSS redirection
* table in hardware.
*/
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
hw->rss_ind_tbl_size);
if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
return ret;
rss_indir_table_uninit:
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret1 = hns3_rss_reset_indir_table(hw);
if (ret1 != 0)
return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
#include <rte_flow.h>
#define HNS3_ETH_RSS_SUPPORT ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
#define HNS3_RSS_IND_TBL_SIZE 512 /* The size of hash lookup table */
#define HNS3_RSS_IND_TBL_SIZE_MAX 2048
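As a usage aside, an application would typically intersect its requested
hash types with the set a device advertises; a minimal sketch (the hash
key is left NULL so the PMD keeps its default):

#include <rte_ethdev.h>

/* Illustrative only: request IP/TCP/UDP RSS with the renamed
 * RTE_ETH_RSS_* bits, masked by what the device reports.
 */
static int
app_update_rss_types(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_conf rss_conf = { 0 };
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	rss_conf.rss_hf = (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
			  info.flow_type_rss_offloads;
	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}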
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0f222b37f9d1..01e43791572b 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1912,7 +1912,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
/* CRC len set here is used for amending packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
rxq->rx_buf_len);
}
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
@@ -2833,7 +2833,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
vec_allowed = vec_support && hns3_get_default_vec_support();
sve_allowed = vec_support && hns3_get_sve_support();
simple_allowed = !dev->data->scattered_rx &&
- (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+ (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
return hns3_recv_pkts_vec;
@@ -3127,7 +3127,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
int ret;
offloads = hw->data->dev_conf.rxmode.offloads;
- gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4279,7 +4279,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
if (hns3_dev_ptp_supported(hw))
return false;
- return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+ return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
}
static bool
@@ -4291,16 +4291,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
return true;
#else
#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0c8..2fa3a01dd3bf 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
uint16_t rx_rearm_nb; /* number of remaining BDs to be re-armed */
- /* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+ /* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
uint8_t crc_len;
/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index 844512f6ceec..d01a8d62bfb1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
if (hns3_dev_ptp_supported(hw))
return -ENOTSUP;
- /* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
- if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ /* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+ if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
return -ENOTSUP;
return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
int
hns3_rx_check_vec_support(struct rte_eth_dev *dev)
{
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN;
+ uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hns3_dev_ptp_supported(hw))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed17a..c199a87c6df4 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1641,7 +1641,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* Set the global registers with default ether type value */
if (!pf->support_multi_driver) {
- ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
if (ret != I40E_SUCCESS) {
PMD_INIT_LOG(ERR,
@@ -1909,8 +1909,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
ad->tx_simple_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Only legacy filter API needs the following fdir config. So when the
* legacy filter API is deprecated, the following codes should also be
@@ -1944,13 +1944,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
* number, which will be available after rx_queue_setup(). dev_start()
* function is good to place RSS setup.
*/
- if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
ret = i40e_vmdq_setup(dev);
if (ret)
goto err;
}
- if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = i40e_dcb_setup(dev);
if (ret) {
PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2227,17 +2227,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
{
uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed |= I40E_LINK_SPEED_40GB;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed |= I40E_LINK_SPEED_25GB;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed |= I40E_LINK_SPEED_20GB;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed |= I40E_LINK_SPEED_10GB;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed |= I40E_LINK_SPEED_1GB;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
link_speed |= I40E_LINK_SPEED_100MB;
return link_speed;
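The bitmap parsed above is supplied by the application through
dev_conf.link_speeds; a hedged sketch of forcing a fixed 10G link (the
helper and single-queue setup are assumptions, and the port must be
stopped):

#include <rte_ethdev.h>

/* Illustrative only: pin the link to a fixed 10G rate with the renamed
 * speed bits instead of leaving autonegotiation enabled.
 */
static int
app_force_10g(uint16_t port_id, const struct rte_eth_conf *base)
{
	struct rte_eth_conf conf = *base;

	conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}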
@@ -2345,13 +2345,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
I40E_AQ_PHY_LINK_ENABLED;
- if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
- conf->link_speeds = ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_100M;
+ if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+ conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_100M;
abilities |= I40E_AQ_PHY_AN_ENABLED;
} else {
@@ -2910,34 +2910,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
/* Parse the link status */
switch (link_speed) {
case I40E_REG_SPEED_0:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_REG_SPEED_1:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_REG_SPEED_2:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_REG_SPEED_3:
if (hw->mac.type == I40E_MAC_X722) {
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
} else {
reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
if (reg_val & I40E_REG_MACC_25GB)
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
else
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
}
break;
case I40E_REG_SPEED_4:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
default:
PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2964,8 +2964,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
status = i40e_aq_get_link_info(hw, enable_lse,
&link_status, NULL);
if (unlikely(status != I40E_SUCCESS)) {
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
return;
}
@@ -2980,28 +2980,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
/* Parse the link status */
switch (link_status.link_speed) {
case I40E_LINK_SPEED_100MB:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (link->link_status)
- link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -3018,9 +3018,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
/* i40e uses full duplex only */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (!wait_to_complete && !enable_lse)
update_link_reg(hw, &link);
@@ -3748,34 +3748,34 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3834,7 +3834,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
/* For XL710 */
- dev_info->speed_capa = ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
dev_info->default_rxportconf.nb_queues = 2;
dev_info->default_txportconf.nb_queues = 2;
if (dev->data->nb_rx_queues == 1)
@@ -3848,17 +3848,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
/* For XXV710 */
- dev_info->speed_capa = ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
dev_info->default_txportconf.ring_size = 256;
} else {
/* For X710 */
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
dev_info->default_rxportconf.ring_size = 512;
dev_info->default_txportconf.ring_size = 256;
} else {
@@ -3897,7 +3897,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
int ret;
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
reg_id = 2;
}
@@ -3944,12 +3944,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
int ret = 0;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) ||
- (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+ (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -3963,12 +3963,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
/* 802.1ad frames ability is added in NVM API 1.7*/
if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->first_tag = rte_cpu_to_le_16(tpid);
- else if (vlan_type == ETH_VLAN_TYPE_INNER)
+ else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
hw->second_tag = rte_cpu_to_le_16(tpid);
} else {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->second_tag = rte_cpu_to_le_16(tpid);
}
ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -4027,37 +4027,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
i40e_vsi_config_vlan_filter(vsi, TRUE);
else
i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_vlan_stripping(vsi, FALSE);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
i40e_vsi_config_double_vlan(vsi, TRUE);
/* Set global registers with default ethertype. */
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
}
else
i40e_vsi_config_double_vlan(vsi, FALSE);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
/* Enable or disable outer VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4140,17 +4140,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* Return current mode according to actual setting*/
switch (hw->fc.current_mode) {
case I40E_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case I40E_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case I40E_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case I40E_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
};
return 0;
@@ -4166,10 +4166,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
struct i40e_hw *hw;
struct i40e_pf *pf;
enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
- [RTE_FC_NONE] = I40E_FC_NONE,
- [RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
- [RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
- [RTE_FC_FULL] = I40E_FC_FULL
+ [RTE_ETH_FC_NONE] = I40E_FC_NONE,
+ [RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+ [RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+ [RTE_ETH_FC_FULL] = I40E_FC_FULL
};
/* The high_water field in rte_eth_fc_conf uses the kilobyte unit */
@@ -4316,7 +4316,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
}
rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
else
mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4469,7 +4469,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4485,8 +4485,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4512,7 +4512,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4529,8 +4529,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -4847,7 +4847,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
hw->func_caps.num_vsis - vsi_count);
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS);
+ RTE_ETH_64_POOLS);
if (pf->max_nb_vmdq_vsi) {
pf->flags |= I40E_FLAG_VMDQ;
pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6132,10 +6132,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
int mask = 0;
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = i40e_vlan_offload_set(dev, mask);
if (ret) {
PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6262,9 +6262,9 @@ i40e_pf_setup(struct i40e_pf *pf)
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
- if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+ if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
- else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+ else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
else {
PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7117,7 +7117,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
{
uint32_t vid_idx, vid_bit;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return 0;
vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7152,7 +7152,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
int ret;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return;
i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -8730,16 +8730,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8765,12 +8765,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8862,7 +8862,7 @@ int
i40e_pf_reset_rss_reta(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->adapter->hw;
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
int num;
@@ -8870,7 +8870,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
* configured. It's necessary to calculate the actual PF
* queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
else
num = pf->dev_data->nb_rx_queues;
@@ -8949,7 +8949,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
if (!(rss_hf & pf->adapter->flow_types_mask) ||
- !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+ !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return 0;
hw = I40E_PF_TO_HW(pf);
@@ -10412,8 +10412,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
return I40E_ERR_NO_MEMORY;
}
switch (mirror_conf->rule_type) {
- case ETH_MIRROR_VLAN:
- for (i = 0, j = 0; i < ETH_MIRROR_MAX_VLANS; i++) {
+ case RTE_ETH_MIRROR_VLAN:
+ for (i = 0, j = 0; i < RTE_ETH_MIRROR_MAX_VLANS; i++) {
if (mirror_conf->vlan.vlan_mask & (1ULL << i)) {
mirr_rule->entries[j] =
mirror_conf->vlan.vlan_id[i];
@@ -10427,8 +10427,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
}
mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_VLAN;
break;
- case ETH_MIRROR_VIRTUAL_POOL_UP:
- case ETH_MIRROR_VIRTUAL_POOL_DOWN:
+ case RTE_ETH_MIRROR_VIRTUAL_POOL_UP:
+ case RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN:
/* check if the specified pool bit is out of range */
if (mirror_conf->pool_mask > (uint64_t)(1ULL << (pf->vf_num + 1))) {
PMD_DRV_LOG(ERR, "pool mask is out of range.");
@@ -10453,15 +10453,15 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
}
/* egress and ingress in aq commands means from switch but not port */
mirr_rule->rule_type =
- (mirror_conf->rule_type == ETH_MIRROR_VIRTUAL_POOL_UP) ?
+ (mirror_conf->rule_type == RTE_ETH_MIRROR_VIRTUAL_POOL_UP) ?
I40E_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS :
I40E_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS;
break;
- case ETH_MIRROR_UPLINK_PORT:
+ case RTE_ETH_MIRROR_UPLINK_PORT:
/* egress and ingress in aq commands means from switch but not port*/
mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS;
break;
- case ETH_MIRROR_DOWNLINK_PORT:
+ case RTE_ETH_MIRROR_DOWNLINK_PORT:
mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS;
break;
default:
@@ -10603,16 +10603,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_25G:
tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
break;
@@ -10840,7 +10840,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
else
*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_cfg->pfc.willing = 0;
dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11348,7 +11348,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
uint16_t bsf, tc_mapping;
int i, j = 0;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
else
dcb_info->nb_tcs = 1;
@@ -11396,7 +11396,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
}
j++;
- } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+ } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
return 0;
}
@@ -11774,10 +11774,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > I40E_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
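
For reference, a minimal application-side sketch of the renamed constants in use (not part of this patch; port_id and the queue counts are placeholders supplied by the caller):

#include <rte_ethdev.h>

/* Configure a port using the RTE_ETH_-prefixed names. */
static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {0};

	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;	/* was ETH_MQ_RX_RSS */
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	/* was DEV_RX_OFFLOAD_* */
			       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
	conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;	/* was ETH_RSS_IP */
	conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

Only the macro names change; the values and semantics are untouched, so a recompile is all an application needs.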
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60b3..f21c2de6bdb9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -139,17 +139,17 @@ enum i40e_flxpld_layer_idx {
I40E_FLAG_RSS_AQ_CAPABLE)
#define I40E_RSS_OFFLOAD_ALL ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/* All bits of RSS hash enable for X722*/
#define I40E_RSS_HENA_ALL_X722 ( \
@@ -1076,7 +1076,7 @@ struct i40e_rte_flow_rss_conf {
uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /**< Hash key. */
- uint16_t queue[ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
+ uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
bool symmetric_enable; /**< true, if enable symmetric */
uint64_t config_pctypes; /**< All PCTYPES with the flow */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b227..cda426fe5614 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1077,7 +1077,7 @@ i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid)
* VLAN_STRIP by default. So reconfigure the vlan_offload
* as it was done by the app earlier.
*/
- err = i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+ err = i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
if (err)
PMD_DRV_LOG(ERR, "fail to set vlan_strip");
@@ -1403,28 +1403,28 @@ i40evf_handle_pf_event(struct rte_eth_dev *dev, uint8_t *msg,
pf_msg->event_data.link_event_adv.link_status;
switch (pf_msg->event_data.link_event_adv.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
vf->link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
vf->link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_2_5G:
+ case RTE_ETH_SPEED_NUM_2_5G:
vf->link_speed = VIRTCHNL_LINK_SPEED_2_5GB;
break;
- case ETH_SPEED_NUM_5G:
+ case RTE_ETH_SPEED_NUM_5G:
vf->link_speed = VIRTCHNL_LINK_SPEED_5GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
vf->link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
vf->link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
vf->link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
vf->link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
@@ -1770,7 +1770,7 @@ static int
i40evf_init_vlan(struct rte_eth_dev *dev)
{
/* Apply vlan offload setting */
- i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+ i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
return 0;
}
@@ -1785,9 +1785,9 @@ i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40evf_enable_vlan_strip(dev);
else
i40evf_disable_vlan_strip(dev);
@@ -1933,7 +1933,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
/**
* Check if the jumbo frame and maximum packet length are set correctly
*/
- if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -1954,7 +1954,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
}
}
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size)
dev_data->scattered_rx = 1;
@@ -2290,35 +2290,35 @@ i40evf_dev_link_update(struct rte_eth_dev *dev,
/* Linux driver PF host */
switch (vf->link_speed) {
case I40E_LINK_SPEED_100MB:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (vf->link_up)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
/* full duplex only */
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -2367,36 +2367,36 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = dev_info->max_rx_pktlen - I40E_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = vf->adapter->flow_types_mask;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = 0;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -2596,10 +2596,10 @@ i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
uint16_t i, idx, shift;
int ret;
- if (reta_size != ETH_RSS_RETA_SIZE_64) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number of hardware can "
- "support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+ "support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
return -EINVAL;
}
@@ -2612,8 +2612,8 @@ i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -2635,10 +2635,10 @@ i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
uint8_t *lut;
int ret;
- if (reta_size != ETH_RSS_RETA_SIZE_64) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number of hardware can "
- "support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+ "support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
return -EINVAL;
}
@@ -2652,8 +2652,8 @@ i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
if (ret)
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -2770,7 +2770,7 @@ i40evf_config_rss(struct i40e_vf *vf)
uint8_t *lut_info;
int ret;
- if (vf->dev_data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (vf->dev_data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
i40evf_disable_rss(vf);
PMD_DRV_LOG(DEBUG, "RSS not configured");
return 0;
@@ -2887,10 +2887,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > I40E_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return ret;
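
The reta_update/reta_query hunks above all repeat the same idx/shift indexing; a minimal sketch of that scheme from the application side, using the renamed RTE_ETH_RETA_GROUP_SIZE (reta_size and nb_queues here are assumed to come from rte_eth_dev_info_get(), not from this patch):

#include <string.h>
#include <rte_ethdev.h>

/* Spread RSS traffic round-robin across nb_queues queues. */
static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;	/* 64-entry group */
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;	/* slot in group */

		reta_conf[idx].mask |= 1ULL << shift;	/* mark slot valid */
		reta_conf[idx].reta[shift] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}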
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c47..d1cb992be61d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
uint64_t reg_r = 0;
uint16_t reg_id;
uint16_t tpid;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfcc6..3755d4d3fe2a 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -102,47 +102,47 @@ struct i40e_hash_map_rss_inset {
const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
/* IPv4 */
- { ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* IPv6 */
- { ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* Port */
- { ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+ { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
/* Ether */
- { ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
- { ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+ { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+ { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
/* VLAN */
- { ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
- { ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+ { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+ { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
};
#define I40E_HASH_VOID_NEXT_ALLOW BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -201,30 +201,30 @@ struct i40e_hash_match_pattern {
#define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
pattern, rss_mask, true, cus_pctype }
-#define I40E_HASH_L2_RSS_MASK (ETH_RSS_VLAN | ETH_RSS_ETH | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY)
#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY)
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
-#define I40E_HASH_IPV4_L23_RSS_MASK (ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK (ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
- ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
-#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
-#define I40E_HASH_L4_TYPES (ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
@@ -232,68 +232,68 @@ struct i40e_hash_match_pattern {
static const struct i40e_hash_match_pattern match_patterns[] = {
/* Ether */
I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
- ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+ RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
I40E_FILTER_PCTYPE_L2_PAYLOAD),
/* IPv4 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV4),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
- ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
- ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
- ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
/* IPv6 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
- ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
- ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
- ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
/* ESP */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
/* GTPC */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -308,27 +308,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
/* L2TPV3 */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
/* AH */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV6),
};
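
The match_patterns table above pairs each flow pattern with the superset of RSS types it accepts. A hypothetical rte_flow request that falls inside, say, I40E_HASH_IPV4_L234_RSS_MASK might look like the following sketch (queues and error handling are placeholders, not from this patch):

#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Hash TCP/IPv4 flows on the source address only. */
static int
request_rss(uint16_t port_id, const uint16_t *queues, uint32_t nb_queues,
	    struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss = {
		.types = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			 RTE_ETH_RSS_L3_SRC_ONLY,
		.queue = queues,
		.queue_num = nb_queues,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err) ?
	       0 : -rte_errno;
}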
@@ -564,29 +564,29 @@ i40e_hash_get_inset(uint64_t rss_types)
/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
* it is the same case as none of them are added.
*/
- mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
- if (mask == ETH_RSS_L2_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
inset &= ~I40E_INSET_DMAC;
- else if (mask == ETH_RSS_L2_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
inset &= ~I40E_INSET_SMAC;
- mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
- if (mask == ETH_RSS_L3_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
- else if (mask == ETH_RSS_L3_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
- mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
- if (mask == ETH_RSS_L4_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
inset &= ~I40E_INSET_DST_PORT;
- else if (mask == ETH_RSS_L4_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
inset &= ~I40E_INSET_SRC_PORT;
if (rss_types & I40E_HASH_L4_TYPES) {
uint64_t l3_mask = rss_types &
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
uint64_t l4_mask = rss_types &
- (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
if (l3_mask && !l4_mask)
inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -825,7 +825,7 @@ i40e_hash_config(struct i40e_pf *pf,
/* Update lookup table */
if (rss_info->queue_num > 0) {
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i, j = 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -932,7 +932,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
"RSS key is ignored when queues specified");
pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
max_queue = i40e_pf_calc_configured_queues_num(pf);
else
max_queue = pf->dev_data->nb_rx_queues;
@@ -1070,22 +1070,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
uint64_t type, mask;
/* Validate L2 */
- type = ETH_RSS_ETH & rss_types;
- mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+ type = RTE_ETH_RSS_ETH & rss_types;
+ mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L3 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
- mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+ mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L4 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
- mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+ mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
if (!type && mask)
return false;
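
The rule the i40e_hash_get_inset() hunk encodes — exactly one of SRC_ONLY/DST_ONLY trims the opposite field, while setting both (or neither) keeps the full tuple — can be shown in isolation. In this sketch the I40E-specific inset bits are abstracted into src_bits/dst_bits placeholders:

#include <stdint.h>
#include <rte_ethdev.h>

/* Trim an L3 hash input set per the SRC_ONLY/DST_ONLY request. */
static uint64_t
trim_l3_inset(uint64_t inset, uint64_t rss_types,
	      uint64_t src_bits, uint64_t dst_bits)
{
	uint64_t mask = rss_types &
			(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);

	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
		inset &= ~dst_bits;	/* hash on source address only */
	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
		inset &= ~src_bits;	/* hash on destination address only */
	/* both set, or neither: keep src and dst in the input set */
	return inset;
}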
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
event.event_data.link_event.link_status =
dev->data->dev_link.link_status;
- /* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+ /* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
switch (dev->data->dev_link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 8329cbdd4e30..3bad4052ed1b 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
for (i = 0; i < tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
if (k) {
for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -2005,7 +2005,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->queue_id = queue_idx;
rxq->reg_idx = reg_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2265,7 +2265,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
}
/* check simple tx conflict */
if (ad->tx_simple_allowed) {
- if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+ if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
PMD_DRV_LOG(ERR, "No-simple tx is required.");
return -EINVAL;
@@ -2925,7 +2925,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -3441,7 +3441,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
ad->tx_vec_allowed = (ad->tx_simple_allowed &&
txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
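
The tx_simple_allowed expression above reads more clearly when pulled out: the simple Tx path is taken only when MBUF_FAST_FREE is the sole offload requested and the RS threshold allows burst freeing. A condensed sketch of that predicate (illustrative, not driver code):

#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>

static bool
tx_simple_allowed(uint64_t offloads, uint16_t tx_rs_thresh,
		  uint16_t max_burst)
{
	/* true iff no offload bit outside MBUF_FAST_FREE is set */
	return offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
	       tx_rs_thresh >= max_burst;
}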
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e857..303a4db47dbd 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
bool rx_deferred_start; /**< don't start this queue in dev start */
uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
uint8_t dcb_tc; /**< Traffic class of rx queue */
- uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
};
struct i40e_tx_entry {
@@ -165,7 +165,7 @@ struct i40e_tx_queue {
bool q_set; /**< indicate if tx queue has been configured */
bool tx_deferred_start; /**< don't start this queue in dev start */
uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
};
/** Offload features */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d6422394..5f00d43950aa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -899,7 +899,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52ed98d62d0..0192164c35fa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
*/
txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < n; i++) {
free[i] = txep[i].mbuf;
txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct i40e_rx_queue *rxq;
uint16_t desc, i;
bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
/* no QinQ support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b5538132..6d90b0f3511b 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
return -EINVAL;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
return i40e_vsi_config_vlan_filter(vsi, TRUE);
else
return i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
return i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index b3bd07811198..1d4383e89327 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -48,18 +48,18 @@
VIRTCHNL_VF_OFFLOAD_RX_POLLING)
#define IAVF_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e7c..fc0087968b78 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -265,53 +265,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
static const uint64_t map_hena_rss[] = {
/* IPv4 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
- ETH_RSS_NONFRAG_IPV4_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
- ETH_RSS_NONFRAG_IPV4_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
/* IPv6 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
- ETH_RSS_NONFRAG_IPV6_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
- ETH_RSS_NONFRAG_IPV6_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
/* L2 Payload */
- [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+ [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
};
- const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV4_OTHER |
- ETH_RSS_FRAG_IPV4;
+ const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_FRAG_IPV4;
- const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP |
- ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_FRAG_IPV6;
+ const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_FRAG_IPV6;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -330,13 +330,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
/**
- * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+ * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
* generalizations of all other IPv4 and IPv6 RSS types.
*/
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
rss_hf |= ipv4_rss;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
rss_hf |= ipv6_rss;
RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -362,10 +362,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
if (valid_rss_hf & ipv4_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
if (valid_rss_hf & ipv6_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
if (rss_hf & ~valid_rss_hf)
PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -466,7 +466,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
return 0;
enable = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
iavf_config_vlan_insert_v2(adapter, enable);
return 0;
@@ -478,10 +478,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
int err;
err = iavf_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to update vlan offload");
return err;
@@ -511,8 +511,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
ad->rx_vec_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Large VF setting */
if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -585,7 +585,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -608,7 +608,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -943,35 +943,35 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1031,42 +1031,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
*/
switch (vf->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
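
Worth noting as an observation (not part of the change): the RTE_ETH_SPEED_NUM_* macros are defined as the link speed in Mb/s, so the switch above maps the raw virtchnl Mb/s value onto the canonical constant of the same numeric value. Condensed:

#include <stdint.h>
#include <rte_ethdev.h>

static uint32_t
mbps_to_speed_num(uint32_t mbps)
{
	switch (mbps) {
	case 10000:  return RTE_ETH_SPEED_NUM_10G;	/* == 10000 */
	case 25000:  return RTE_ETH_SPEED_NUM_25G;	/* == 25000 */
	case 100000: return RTE_ETH_SPEED_NUM_100G;	/* == 100000 */
	default:     return RTE_ETH_SPEED_NUM_NONE;	/* == 0 */
	}
}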
@@ -1214,14 +1214,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
bool enable;
int err;
- if (mask & ETH_VLAN_FILTER_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
iavf_iterate_vlan_filters_v2(dev, enable);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = iavf_config_vlan_strip_v2(adapter, enable);
/* If not support, the stripping is already disabled by PF */
@@ -1250,9 +1250,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
err = iavf_enable_vlan_strip(adapter);
else
err = iavf_disable_vlan_strip(adapter);
@@ -1294,8 +1294,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(lut, vf->rss_lut, reta_size);
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -1331,8 +1331,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = vf->rss_lut[i];
}
@@ -1457,10 +1457,10 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > IAVF_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -1564,7 +1564,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
ret = iavf_query_stats(adapter, &pstats);
if (ret == 0) {
uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
RTE_ETHER_CRC_LEN;
iavf_update_stats(vsi, pstats);
stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 2b03dad8589c..1329a389f742 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,83 +341,83 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
#define IAVF_RSS_TYPE_OUTER_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define IAVF_RSS_TYPE_OUTER_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define IAVF_RSS_TYPE_VLAN_IPV4 (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4 ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4 RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6 ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6 RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* GTPU IPv4 */
#define IAVF_RSS_TYPE_GTPU_IPV4 (IAVF_RSS_TYPE_INNER_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_UDP (IAVF_RSS_TYPE_INNER_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_TCP (IAVF_RSS_TYPE_INNER_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define IAVF_RSS_TYPE_GTPU_IPV6 (IAVF_RSS_TYPE_INNER_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_UDP (IAVF_RSS_TYPE_INNER_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_TCP (IAVF_RSS_TYPE_INNER_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/**
* Supported pattern for hash.
@@ -435,7 +435,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv4_udp, IAVF_RSS_TYPE_VLAN_IPV4_UDP, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_vlan_ipv4_tcp, IAVF_RSS_TYPE_VLAN_IPV4_TCP, &outer_ipv4_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv4_sctp, IAVF_RSS_TYPE_VLAN_IPV4_SCTP, &outer_ipv4_sctp_tmplt},
- {iavf_pattern_eth_ipv4_gtpu, ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
+ {iavf_pattern_eth_ipv4_gtpu, RTE_ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4, IAVF_RSS_TYPE_GTPU_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_udp, IAVF_RSS_TYPE_GTPU_IPV4_UDP, &inner_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp, IAVF_RSS_TYPE_GTPU_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -477,9 +477,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ah, IAVF_RSS_TYPE_IPV4_AH, &ipv4_ah_tmplt},
{iavf_pattern_eth_ipv4_l2tpv3, IAVF_RSS_TYPE_IPV4_L2TPV3, &ipv4_l2tpv3_tmplt},
{iavf_pattern_eth_ipv4_pfcp, IAVF_RSS_TYPE_IPV4_PFCP, &ipv4_pfcp_tmplt},
- {iavf_pattern_eth_ipv4_gtpc, ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
- {iavf_pattern_eth_ecpri, ETH_RSS_ECPRI, &eth_ecpri_tmplt},
- {iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_gtpc, RTE_ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ecpri, RTE_ETH_RSS_ECPRI, &eth_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_ecpri, RTE_ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4_tcp, IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -497,7 +497,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
- {iavf_pattern_eth_ipv6_gtpu, ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
+ {iavf_pattern_eth_ipv6_gtpu, RTE_ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6, IAVF_RSS_TYPE_GTPU_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_udp, IAVF_RSS_TYPE_GTPU_IPV6_UDP, &inner_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp, IAVF_RSS_TYPE_GTPU_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -539,7 +539,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv6_ah, IAVF_RSS_TYPE_IPV6_AH, &ipv6_ah_tmplt},
{iavf_pattern_eth_ipv6_l2tpv3, IAVF_RSS_TYPE_IPV6_L2TPV3, &ipv6_l2tpv3_tmplt},
{iavf_pattern_eth_ipv6_pfcp, IAVF_RSS_TYPE_IPV6_PFCP, &ipv6_pfcp_tmplt},
- {iavf_pattern_eth_ipv6_gtpc, ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ipv6_gtpc, RTE_ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6_tcp, IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -573,57 +573,57 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
struct virtchnl_rss_cfg rss_cfg;
#define IAVF_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
rss_cfg.proto_hdrs = inner_ipv4_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
rss_cfg.proto_hdrs = inner_ipv6_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_FRAG_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
struct virtchnl_proto_hdrs hdr = {
.tunnel_level = TUNNEL_LEVEL_OUTER,
.count = 3,
@@ -641,7 +641,7 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_FRAG_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
struct virtchnl_proto_hdrs hdr = {
.tunnel_level = TUNNEL_LEVEL_OUTER,
.count = 3,
@@ -804,28 +804,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr = &proto_hdrs->proto_hdr[i];
switch (hdr->type) {
case VIRTCHNL_PROTO_HDR_ETH:
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
hdr->field_selector = 0;
- else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
REFINE_PROTO_FLD(DEL, ETH_DST);
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
REFINE_PROTO_FLD(DEL, ETH_SRC);
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
- } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -835,11 +835,11 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
} else {
hdr->field_selector = 0;
@@ -847,17 +847,17 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -874,7 +874,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
}
break;
case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
else
hdr->field_selector = 0;
@@ -882,15 +882,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
@@ -898,15 +898,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_TCP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
@@ -914,46 +914,46 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_SCTP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
break;
case VIRTCHNL_PROTO_HDR_S_VLAN:
- if (!(rss_type & ETH_RSS_S_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_S_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_C_VLAN:
- if (!(rss_type & ETH_RSS_C_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_C_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_L2TPV3:
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ESP:
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_AH:
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_PFCP:
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ECPRI:
- if (!(rss_type & ETH_RSS_ECPRI))
+ if (!(rss_type & RTE_ETH_RSS_ECPRI))
hdr->field_selector = 0;
break;
default:
@@ -970,7 +970,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
struct virtchnl_proto_hdr *hdr;
int i;
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
for (i = 0; i < proto_hdrs->count; i++) {
@@ -1067,10 +1067,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -1081,27 +1081,27 @@ struct rss_attr_type {
uint64_t type;
};
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE64)
#define INVALID_RSS_ATTR (RTE_ETH_RSS_L3_PRE32 | \
@@ -1111,9 +1111,9 @@ struct rss_attr_type {
RTE_ETH_RSS_L3_PRE96)
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE64, VALID_RSS_IPV6},
{INVALID_RSS_ATTR, 0}
@@ -1130,15 +1130,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
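For reference, the renamed RSS names as seen from the application side; a minimal sketch in C (the queue counts and the rss_hf selection below are illustrative assumptions, not taken from this patch):

#include <rte_ethdev.h>

/* Minimal sketch: enable RSS on one port using the RTE_ETH_* names.
 * Error handling beyond the essentials is elided.
 */
static int
app_configure_rss(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	/* Request L3/L4 hashing, masked to what the PMD reports. */
	conf.rx_adv_conf.rss_conf.rss_hf =
		(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP) &
		dev_info.flow_type_rss_offloads;

	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
		conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;

	/* 4 Rx / 4 Tx queues are an arbitrary choice for this sketch. */
	return rte_eth_dev_configure(port_id, 4, 4, &conf);
}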
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e33fe4576b6e..4ff856fc82aa 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -609,7 +609,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->vsi = vsi;
rxq->offloads = offloads;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d633..096be81e8a69 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
#define IAVF_VPMD_TX_MAX_FREE_BUF 64
#define IAVF_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define IAVF_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define IAVF_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IAVF_VECTOR_PATH 0
#define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036ef..8f9a397e4143 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -904,7 +904,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH ||
+ RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -957,7 +957,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
raw_desc_bh1, 1);
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cdec..2329928c62cb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1138,7 +1138,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH ||
+ RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1191,7 +1191,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
raw_desc_bh1, 1);
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
@@ -1719,7 +1719,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e9055259b..58f928bdd7ca 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -818,7 +818,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 4c2e0c7216fd..ec53478083b4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -807,7 +807,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
PMD_DRV_LOG(DEBUG, "RSS is not supported");
return -ENOTSUP;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
/* set all lut items to default queue */
memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da8759..6226aa5a80c2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,7 +66,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -89,7 +89,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -559,7 +559,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -620,7 +620,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
}
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
return 0;
@@ -635,8 +635,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
return 0;
}
@@ -658,28 +658,28 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -896,42 +896,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
*/
switch (hw->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = hw->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -950,11 +950,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
udp_tunnel->udp_port);
break;
@@ -981,8 +981,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
break;
default:
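The speed/duplex mapping above lands in struct rte_eth_link; reading it back from an application changes only in the constant names. A minimal sketch (the printf formatting is an illustrative choice):

#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch: query and print link state with the renamed constants. */
static void
app_print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: up, %u Mbps, %s duplex\n",
		       port_id, link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
	else
		printf("port %u: down\n", port_id);
}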
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..0dac1b92bfdb 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -135,29 +135,29 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -239,9 +239,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
bool enable = !!(dev_conf->rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
@@ -338,7 +338,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (!ice_dcf_vlan_offload_ena(repr))
return -ENOTSUP;
- if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer VLAN in QinQ\n");
return -EINVAL;
@@ -368,7 +368,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
err = ice_dcf_vf_repr_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
"Failed to reset VLAN stripping : %d\n",
@@ -441,7 +441,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
int err;
err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954f1..459718ad33f6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1449,9 +1449,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
TAILQ_INIT(&vsi->mac_list);
TAILQ_INIT(&vsi->vlan_list);
- /* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+ /* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
- ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+ RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
hw->func_caps.common_cap.rss_table_size;
pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
@@ -2809,16 +2809,16 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
int ret;
#define ICE_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV6)
ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
if (ret)
@@ -2828,7 +2828,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
cfg.symm = 0;
cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
/* Configure RSS for IPv4 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2838,7 +2838,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for IPv6 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2848,7 +2848,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2859,7 +2859,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2870,7 +2870,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2881,7 +2881,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2892,7 +2892,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -2903,7 +2903,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -2913,7 +2913,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -2923,7 +2923,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -2933,7 +2933,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2943,7 +2943,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2953,7 +2953,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2963,7 +2963,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2973,7 +2973,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_FRAG_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_FRAG;
cfg.hash_flds = ICE_FLOW_HASH_IPV4 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2982,7 +2982,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_FRAG_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_FRAG;
cfg.hash_flds = ICE_FLOW_HASH_IPV6 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3124,8 +3124,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_rx_queues) {
ret = ice_init_rss(pf);
@@ -3344,8 +3344,8 @@ ice_dev_start(struct rte_eth_dev *dev)
ice_set_rx_function(dev);
ice_set_tx_function(dev);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = ice_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3449,40 +3449,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->flow_type_rss_offloads = 0;
if (!is_safe_mode) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
dev_info->rx_queue_offload_capa = 0;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->reta_size = pf->hash_lut_size;
dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3521,24 +3521,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_align = ICE_ALIGN_RING_DESC,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_25G;
phy_type_low = hw->port_info->phy.phy_type_low;
phy_type_high = hw->port_info->phy.phy_type_high;
if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->nb_rx_queues = dev->data->nb_rx_queues;
dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3603,8 +3603,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
status = ice_aq_get_link_info(hw->port_info, enable_lse,
&link_status, NULL);
if (status != ICE_SUCCESS) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
goto out;
}
@@ -3620,55 +3620,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
goto out;
/* Full-duplex operation at all supported speeds */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* Parse the link status */
switch (link_status.link_speed) {
case ICE_AQ_LINK_SPEED_10MB:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case ICE_AQ_LINK_SPEED_100MB:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case ICE_AQ_LINK_SPEED_1000MB:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ICE_AQ_LINK_SPEED_2500MB:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case ICE_AQ_LINK_SPEED_5GB:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case ICE_AQ_LINK_SPEED_10GB:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case ICE_AQ_LINK_SPEED_20GB:
- link.link_speed = ETH_SPEED_NUM_20G;
+ link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case ICE_AQ_LINK_SPEED_25GB:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case ICE_AQ_LINK_SPEED_40GB:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case ICE_AQ_LINK_SPEED_50GB:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case ICE_AQ_LINK_SPEED_100GB:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case ICE_AQ_LINK_SPEED_UNKNOWN:
PMD_DRV_LOG(ERR, "Unknown link speed");
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
default:
PMD_DRV_LOG(ERR, "None link speed");
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
out:
ice_atomic_write_link_status(dev, &link);
@@ -3767,10 +3767,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > ICE_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -4161,15 +4161,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ice_vsi_config_vlan_filter(vsi, true);
else
ice_vsi_config_vlan_filter(vsi, false);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ice_vsi_config_vlan_stripping(vsi, true);
else
ice_vsi_config_vlan_stripping(vsi, false);
@@ -4284,8 +4284,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
lut[i] = reta_conf[idx].reta[shift];
}
@@ -4334,8 +4334,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
goto out;
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift))
reta_conf[idx].reta[shift] = lut[i];
}
@@ -5244,7 +5244,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
break;
default:
@@ -5268,7 +5268,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
break;
default:
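The tunnel-type rename is visible to applications through rte_eth_dev_udp_tunnel_port_add(); a minimal sketch (4789 is the IANA-assigned VXLAN port, used here as an illustrative value):

#include <rte_ethdev.h>

/* Minimal sketch: register a VXLAN UDP port with the renamed enum. */
static int
app_add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789, /* IANA VXLAN port */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}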
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index b4bf651c1c7f..1c4bc4e30349 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -115,19 +115,19 @@
ICE_FLAG_VF_MAC_BY_PF)
#define ICE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/**
* The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 54d14dfcddfb..beb863f70568 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
#define ICE_IPV4_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
#define ICE_IPV6_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE32 | \
RTE_ETH_RSS_L3_PRE48 | \
RTE_ETH_RSS_L3_PRE64)
@@ -373,80 +373,80 @@ struct ice_rss_hash_cfg eth_tmplt = {
};
/* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4)
+#define ICE_RSS_TYPE_ETH_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_ETH_IPV4_UDP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
#define ICE_RSS_TYPE_ETH_IPV4_TCP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
#define ICE_RSS_TYPE_ETH_IPV4_SCTP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
-#define ICE_RSS_TYPE_IPV4 ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+#define ICE_RSS_TYPE_IPV4 RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG (ETH_RSS_ETH | ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_ETH_IPV6_UDP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ICE_RSS_TYPE_ETH_IPV6_TCP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define ICE_RSS_TYPE_ETH_IPV6_SCTP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-#define ICE_RSS_TYPE_IPV6 ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ICE_RSS_TYPE_IPV6 RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define ICE_RSS_TYPE_VLAN_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV4)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_VLAN_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_SCTP (ICE_RSS_TYPE_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define ICE_RSS_TYPE_VLAN_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_FRAG (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_VLAN_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_SCTP (ICE_RSS_TYPE_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* GTPU IPv4 */
#define ICE_RSS_TYPE_GTPU_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define ICE_RSS_TYPE_GTPU_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* PPPOE */
-#define ICE_RSS_TYPE_PPPOE (ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE (RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
/* PPPOE IPv4 */
#define ICE_RSS_TYPE_PPPOE_IPV4 (ICE_RSS_TYPE_IPV4 | \
@@ -465,17 +465,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
ICE_RSS_TYPE_PPPOE)
/* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/* MAC */
-#define ICE_RSS_TYPE_ETH ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH RTE_ETH_RSS_ETH
/**
* Supported patterns for hashing.
@@ -640,51 +640,51 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
*hash_flds &= ~ICE_FLOW_HASH_ETH;
- if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
- if (rss_type & ETH_RSS_ETH)
+ if (rss_type & RTE_ETH_RSS_ETH)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
- if (rss_type & ETH_RSS_C_VLAN)
+ if (rss_type & RTE_ETH_RSS_C_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
- else if (rss_type & ETH_RSS_S_VLAN)
+ else if (rss_type & RTE_ETH_RSS_S_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
- if (!(rss_type & ETH_RSS_PPPOE))
+ if (!(rss_type & RTE_ETH_RSS_PPPOE))
*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
}
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
@@ -693,30 +693,30 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
}
if (rss_type & RTE_ETH_RSS_L3_PRE32) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
} else {
@@ -725,10 +725,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE48) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
} else {
@@ -737,10 +737,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE64) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
} else {
@@ -752,15 +752,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
@@ -769,15 +769,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
@@ -786,15 +786,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
@@ -802,22 +802,22 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
}
}
@@ -851,7 +851,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -873,10 +873,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
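A note for readers tracing the validation logic: each invalid_rss_comb entry groups hash types that must not be requested together, e.g. RTE_ETH_RSS_IPV4 already hashes the L3 header, so combining it with RTE_ETH_RSS_NONFRAG_IPV4_UDP would be ambiguous. A minimal sketch of how such a table can be consumed (the in-tree check may differ in detail):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool
rss_comb_is_invalid(uint64_t rss_type, const uint64_t *comb, size_t n)
{
        size_t i;

        for (i = 0; i < n; i++) {
                /* invalid when more than one member of a forbidden set is present */
                if (__builtin_popcountll(rss_type & comb[i]) > 1)
                        return true;
        }
        return false;
}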
@@ -888,9 +888,9 @@ struct rss_attr_type {
};
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* the IPv6 prefix hash currently supports at most 64 bits */
{RTE_ETH_RSS_L3_PRE32, VALID_RSS_IPV6},
{RTE_ETH_RSS_L3_PRE48, VALID_RSS_IPV6},
@@ -909,16 +909,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
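For completeness, the renamed RTE_ETH_RSS_* flags are what applications now pass at configure time. A minimal, untested sketch (port id and queue counts are placeholders):

#include <rte_ethdev.h>

static int
setup_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf = {
                .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
                .rx_adv_conf = {
                        .rss_conf = {
                                .rss_key = NULL, /* use the PMD default key */
                                .rss_hf = RTE_ETH_RSS_IPV4 |
                                          RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                                          RTE_ETH_RSS_NONFRAG_IPV4_UDP,
                        },
                },
        };

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}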
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047ee..63c07e001f07 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -280,7 +280,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
dev_data->dev_conf.rxmode.max_rx_pkt_len);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -1103,7 +1103,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = vsi->base_queue + queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2780,7 +2780,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
for (i = 0; i < txq->tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
rte_mempool_put(txep->mbuf->pool, txep->mbuf);
txep->mbuf = NULL;
@@ -3254,7 +3254,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
if (ad->tx_simple_allowed)
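The tx_simple_allowed test above relies on a bitmask idiom worth spelling out: x == (x & MASK) holds exactly when x has no bits set outside MASK, i.e. the queue has no offloads beyond fast free. A standalone illustration (not driver code):

#include <assert.h>
#include <stdint.h>

int main(void)
{
        const uint64_t fast_free = UINT64_C(1) << 0; /* stand-in for the fast-free bit */
        const uint64_t other = UINT64_C(1) << 1;     /* any other offload bit */
        uint64_t offloads;

        offloads = fast_free;
        assert(offloads == (offloads & fast_free)); /* only fast free: simple path allowed */

        offloads = fast_free | other;
        assert(offloads != (offloads & fast_free)); /* extra offload: simple path rejected */
        return 0;
}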
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac018043..8c870354619e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -473,7 +473,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d296..6d2038975830 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -584,7 +584,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
@@ -994,7 +994,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a93..a5b573c22da2 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
}
#define ICE_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define ICE_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define ICE_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define ICE_VECTOR_PATH 0
#define ICE_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..c75b06cae1fe 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -314,8 +314,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported */
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
@@ -325,7 +325,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
/* To avoid breaking software that sets an invalid mode, only display
* a warning if an invalid mode is used.
*/
- if (tx_mq_mode != ETH_MQ_TX_NONE)
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
PMD_INIT_LOG(WARNING,
"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
tx_mq_mode);
@@ -341,8 +341,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = igc_check_mq_mode(dev);
if (ret != 0)
@@ -480,12 +480,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (speed == SPEED_2500) {
uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -497,9 +497,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
} else {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -532,7 +532,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -979,18 +979,18 @@ eth_igc_start(struct rte_eth_dev *dev)
/* VLAN Offload Settings */
eth_igc_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
hw->mac.autoneg = 1;
} else {
int num_speeds = 0;
- if (*speeds & ETH_LINK_SPEED_FIXED) {
+ if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_DRV_LOG(ERR,
"Force speed mode currently not supported");
igc_dev_clear_queues(dev);
@@ -1000,33 +1000,33 @@ eth_igc_start(struct rte_eth_dev *dev)
hw->phy.autoneg_advertised = 0;
hw->mac.autoneg = 1;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_2_5G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
num_speeds++;
}
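On the application side, the same RTE_ETH_LINK_SPEED_* bits select which speeds are advertised. A hedged example (helper name and queue counts are illustrative):

#include <rte_ethdev.h>

/* Autonegotiate, advertising only 1G and 2.5G; RTE_ETH_LINK_SPEED_FIXED
 * is deliberately not set, matching the check above.
 */
static int
configure_autoneg_1g_2g5(uint16_t port_id)
{
        struct rte_eth_conf conf = { 0 };

        conf.link_speeds = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}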
@@ -1490,14 +1490,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
- dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_vmdq_pools = 0;
dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1523,9 +1523,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -1603,11 +1603,11 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
@@ -2165,13 +2165,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2203,16 +2203,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->fc.requested_mode = igc_fc_none;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->fc.requested_mode = igc_fc_rx_pause;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->fc.requested_mode = igc_fc_tx_pause;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->fc.requested_mode = igc_fc_full;
break;
default:
@@ -2258,29 +2258,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* set redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta, reg;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to update the register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* check the mask to decide whether the register value must be read first */
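The idx/shift arithmetic mirrors how the ethdev API packs the RETA into 64-entry groups. A caller-side sketch (round-robin queue mapping is illustrative only; nb_queues must be non-zero):

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
fill_reta(uint16_t port_id, uint16_t nb_queues)
{
        struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_128 /
                                             RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i;

        memset(reta, 0, sizeof(reta));
        for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
                uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
                uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

                reta[idx].mask |= UINT64_C(1) << shift;
                reta[idx].reta[shift] = i % nb_queues;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta,
                                           RTE_ETH_RSS_RETA_SIZE_128);
}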
@@ -2314,29 +2314,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* read redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta;
uint16_t idx, shift;
uint8_t j, mask;
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IGC_RSS_RDT_REG_SIZE_MASK);
/* if no need to read register */
if (!mask ||
- shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+ shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
continue;
/* read register and get the queue index */
@@ -2393,23 +2393,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf = 0;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf |= rss_hf;
return 0;
@@ -2495,7 +2495,7 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
return 0;
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
goto write_ext_vlan;
/* Update maximum packet length */
@@ -2528,7 +2528,7 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
return 0;
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
goto write_ext_vlan;
/* Update maximum packet length */
@@ -2554,22 +2554,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igc_vlan_hw_strip_enable(dev);
else
igc_vlan_hw_strip_disable(dev);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igc_vlan_hw_filter_enable(dev);
else
igc_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return igc_vlan_hw_extend_enable(dev);
else
return igc_vlan_hw_extend_disable(dev);
@@ -2587,7 +2587,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
uint32_t reg_val;
/* only the outer TPID of a double VLAN can be configured */
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg_val = IGC_READ_REG(hw, IGC_VET);
reg_val = (reg_val & (~IGC_VET_EXT)) |
((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..066792b8a2d8 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -59,38 +59,38 @@ extern "C" {
#define IGC_TX_MAX_MTU_SEG UINT8_MAX
#define IGC_RX_OFFLOAD_ALL ( \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IGC_TX_OFFLOAD_ALL ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_UDP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_UDP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define IGC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IGC_MAX_ETQF_FILTERS 3 /* etqf(3) is used for 1588 */
#define IGC_ETQF_FILTER_1588 3
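These capability sets are what eth_igc_infos_get() exports, so a rename like this can be sanity-checked end to end with a simple probe (sketch; error handling elided):

#include <rte_ethdev.h>

/* Return non-zero when the port advertises TCP TSO on Tx. */
static int
port_has_tcp_tso(uint16_t port_id)
{
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
                return 0;
        return (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0;
}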
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..82e7e084b41d 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
/**< Start context position for transmit queue. */
struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
static inline uint64_t
@@ -866,23 +866,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
}
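igc_hw_rss_hash_set() is reached through the standard rte_eth_dev_rss_hash_update() path; with the renamed flags a caller looks like this (the hash selection is illustrative):

#include <rte_ethdev.h>

static int
enable_udp_rss(uint16_t port_id)
{
        struct rte_eth_rss_conf rss_conf = {
                .rss_key = NULL, /* keep the current key */
                .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
                          RTE_ETH_RSS_NONFRAG_IPV6_UDP,
        };

        return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}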
@@ -1056,10 +1056,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
}
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igc_rss_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/*
* configure RSS register for following,
* then disable the RSS logic
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
rctl |= IGC_RCTL_LPE;
/*
@@ -1130,7 +1130,7 @@ igc_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0;
bus_addr = rxq->rx_ring_phys_addr;
@@ -1196,7 +1196,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
if (dev->data->scattered_rx) {
@@ -1240,20 +1240,20 @@ igc_rx_init(struct rte_eth_dev *dev)
rxcsum |= IGC_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= IGC_RXCSUM_IPOFL;
else
rxcsum &= ~IGC_RXCSUM_IPOFL;
if (offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rxcsum |= IGC_RXCSUM_TUOFL;
- offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
} else {
rxcsum &= ~IGC_RXCSUM_TUOFL;
}
- if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
rxcsum |= IGC_RXCSUM_CRCOFL;
else
rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1261,7 +1261,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1298,12 +1298,12 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
dvmolr |= IGC_DVMOLR_STRVLAN;
else
dvmolr &= ~IGC_DVMOLR_STRVLAN;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
dvmolr &= ~IGC_DVMOLR_STRCRC;
else
dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2272,10 +2272,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
if (on) {
reg_val |= IGC_DVMOLR_STRVLAN;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..5e7c22c339d1 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(link));
if (adapter->idev.port_info->config.an_enable) {
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
}
if (!adapter->link_up ||
!(lif->state & IONIC_LIF_F_UP)) {
/* Interface is down */
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else {
/* Interface is up */
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (adapter->link_speed) {
case 10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -397,17 +397,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
dev_info->speed_capa =
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/*
* Per-queue capabilities
* RTE does not support disabling a feature on a queue if it is
* enabled globally on the device. Thus the driver does not advertise
- * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+ * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
* though the driver would be otherwise capable of disabling it on
* a per-queue basis.
*/
@@ -421,25 +421,25 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
*/
dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
0;
dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
0;
dev_info->rx_desc_lim = rx_desc_lim;
@@ -474,9 +474,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
fc_conf->autoneg = 0;
if (idev->port_info->config.pause_type)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
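The RTE_ETH_FC_* decoding above is symmetric with the set path that follows. For reference, a hedged caller-side example (note the set path below rejects Rx-only and Tx-only pause on this device):

#include <rte_ethdev.h>

static int
enable_link_pause(uint16_t port_id)
{
        struct rte_eth_fc_conf fc_conf;

        /* start from the device's current settings */
        if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
                return -1;

        fc_conf.mode = RTE_ETH_FC_FULL; /* pause frames in both directions */
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}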
@@ -498,14 +498,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
break;
- case RTE_FC_RX_PAUSE:
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
return -ENOTSUP;
}
@@ -556,12 +556,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = tbl_sz / RTE_RETA_GROUP_SIZE;
+ num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if (reta_conf[i].mask & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
}
}
@@ -596,12 +596,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -629,17 +629,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
IONIC_RSS_HASH_KEY_SIZE);
if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
rss_conf->rss_hf = rss_hf;
@@ -671,17 +671,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (!lif->rss_ind_tbl)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
rss_types |= IONIC_RSS_TYPE_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
rss_types |= IONIC_RSS_TYPE_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -853,15 +853,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
static inline uint32_t
ionic_parse_link_speeds(uint16_t link_speeds)
{
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
return 100000;
- else if (link_speeds & ETH_LINK_SPEED_50G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
return 50000;
- else if (link_speeds & ETH_LINK_SPEED_40G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
return 40000;
- else if (link_speeds & ETH_LINK_SPEED_25G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
return 25000;
- else if (link_speeds & ETH_LINK_SPEED_10G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
return 10000;
else
return 0;
@@ -885,12 +885,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
IONIC_PRINT_CALL();
allowed_speeds =
- ETH_LINK_SPEED_FIXED |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_FIXED |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
if (dev_conf->link_speeds & ~allowed_speeds) {
IONIC_PRINT(ERR, "Invalid link setting");
@@ -907,7 +907,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure link */
- an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
ionic_dev_cmd_port_autoneg(idev, an_enable);
err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
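Conversely, a fixed-speed request is expressed by OR-ing RTE_ETH_LINK_SPEED_FIXED with exactly one speed bit, which the an_enable derivation above turns into autoneg-off. An illustrative sketch:

#include <rte_ethdev.h>

/* Hypothetical helper: pin the port at 25G with autonegotiation off. */
static int
configure_fixed_25g(uint16_t port_id)
{
        struct rte_eth_conf conf = { 0 };

        conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_25G;
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}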
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
#include <rte_ethdev.h>
#define IONIC_ETH_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 431eda777b78..d4eb6c1d78be 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
/*
* IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
- * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+ * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
*/
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
else
lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
/*
* NB: While it is true that RSS_HASH is always enabled on ionic,
* setting this flag unconditionally causes problems in DTS.
- * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
*/
/* RX per-port */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
lif->features |= IONIC_ETH_HW_RX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_RX_CSUM;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
lif->features |= IONIC_ETH_HW_RX_SG;
lif->eth_dev->data->scattered_rx = 1;
} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
}
/* Covers VLAN_STRIP */
- ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+ ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
/* TX per-port */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
lif->features |= IONIC_ETH_HW_TX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_TX_CSUM;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
else
lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
lif->features |= IONIC_ETH_HW_TX_SG;
else
lif->features &= ~IONIC_ETH_HW_TX_SG;
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
lif->features |= IONIC_ETH_HW_TSO;
lif->features |= IONIC_ETH_HW_TSO_IPV6;
lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..0c1f6113d0e9 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -204,11 +204,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
txq->flags |= IONIC_QCQ_F_DEFERRED;
/* Convert the offload flags into queue flags */
- if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_L3;
- if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_TCP;
- if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_UDP;
eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -745,11 +745,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/*
* Note: the interface does not currently support
- * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
* when the adapter is able to keep the CRC, and subtract
* it from the length for all received packets:
* if (eth_dev->data->dev_conf.rxmode.offloads &
- * DEV_RX_OFFLOAD_KEEP_CRC)
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC)
* rxq->crc_len = ETHER_CRC_LEN;
*/
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..2f6df2c2f6b8 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->speed_capa =
(hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
- ETH_LINK_SPEED_10G :
+ RTE_ETH_LINK_SPEED_10G :
((hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
- ETH_LINK_SPEED_25G :
- ETH_LINK_SPEED_AUTONEG);
+ RTE_ETH_LINK_SPEED_25G :
+ RTE_ETH_LINK_SPEED_AUTONEG);
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
@@ -67,31 +67,31 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
};
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
@@ -2410,10 +2410,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
(uint64_t *)&link_speed);
switch (link_speed) {
case IFPGA_RAWDEV_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case IFPGA_RAWDEV_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2471,9 +2471,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2529,9 +2529,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2803,10 +2803,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
if (frame_size > IPN3KE_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ (uint64_t)(RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
else
dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
+ (uint64_t)(~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b54d..e425cea05aa8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1865,7 +1865,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= IXGBE_DMATXCTL_GDV;
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (qinq) {
reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1880,7 +1880,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
" by single VLAN");
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (qinq) {
/* Only the high 16-bits is valid */
IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1967,10 +1967,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -2091,7 +2091,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
if (hw->mac.type == ixgbe_mac_82598EB) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
ctrl |= IXGBE_VLNCTRL_VME;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2108,7 +2108,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl |= IXGBE_RXDCTL_VME;
on = TRUE;
} else {
@@ -2130,17 +2130,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct ixgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
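The per-queue flag updates above back the generic VLAN offload API; a hedged sketch of toggling strip at runtime from an application (assumes RTE_ETH_VLAN_STRIP_OFFLOAD, the renamed offload bit from the same series):

#include <rte_ethdev.h>

static int
toggle_vlan_strip(uint16_t port_id, int on)
{
    int mask = rte_eth_dev_get_vlan_offload(port_id);

    if (mask < 0)
        return mask; /* negative errno from the API */
    if (on)
        mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
    else
        mask &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
    return rte_eth_dev_set_vlan_offload(port_id, mask);
}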
@@ -2151,19 +2151,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
ixgbe_vlan_hw_strip_config(dev);
- }
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ixgbe_vlan_hw_filter_enable(dev);
else
ixgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
ixgbe_vlan_hw_extend_enable(dev);
else
ixgbe_vlan_hw_extend_disable(dev);
@@ -2202,10 +2201,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -2229,18 +2228,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implemented yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2250,12 +2249,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if no mq mode is configured, use the default scheme */
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -2264,12 +2263,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -2284,13 +2283,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdq+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2299,15 +2298,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2316,39 +2315,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -2357,7 +2356,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB) {
if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
PMD_INIT_LOG(ERR,
@@ -2381,8 +2380,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = ixgbe_check_mq_mode(dev);
@@ -2627,15 +2626,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
ixgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -2712,17 +2711,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- allowed_speeds = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
break;
default:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
}
link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2736,7 +2735,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2754,17 +2753,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
speed = IXGBE_LINK_SPEED_82599_AUTONEG;
}
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= IXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= IXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= IXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= IXGBE_LINK_SPEED_100_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= IXGBE_LINK_SPEED_10_FULL;
}
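On the application side the renamed speed bits compose the same way as the old ones; a small sketch, with the 10G choice purely illustrative:

#include <rte_ethdev.h>

static void
set_link_speeds(struct rte_eth_conf *conf, int fixed_10g)
{
    if (fixed_10g)
        /* exactly one speed bit plus FIXED disables autonegotiation */
        conf->link_speeds = RTE_ETH_LINK_SPEED_10G |
                            RTE_ETH_LINK_SPEED_FIXED;
    else
        /* RTE_ETH_LINK_SPEED_AUTONEG (0) keeps autonegotiation on */
        conf->link_speeds = RTE_ETH_LINK_SPEED_AUTONEG;
}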
@@ -3839,7 +3838,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB)
dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
}
@@ -3849,9 +3848,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3890,21 +3889,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
if (hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X540_vf ||
hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550_vf) {
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
}
if (hw->mac.type == ixgbe_mac_X550) {
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
}
/* Driver-preferred Rx/Tx parameters */
@@ -3973,9 +3972,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
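Applications discover the renamed capability bits through rte_eth_dev_info_get(); a minimal sketch (the particular bits tested are arbitrary examples):

#include <rte_ethdev.h>

static int
port_supports_10g_rss(uint16_t port_id)
{
    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(port_id, &info) != 0)
        return 0;
    return (info.speed_capa & RTE_ETH_LINK_SPEED_10G) &&
           (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH);
}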
@@ -4218,11 +4217,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
u32 esdp_reg;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
hw->mac.get_link_status = true;
@@ -4244,8 +4243,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
if (diag != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -4281,37 +4280,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case IXGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
case IXGBE_LINK_SPEED_10_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case IXGBE_LINK_SPEED_100_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case IXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case IXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case IXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case IXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
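The renamed link fields read back unchanged through the stable query API; a hedged sketch:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
    struct rte_eth_link link;

    if (rte_eth_link_get_nowait(port_id, &link) != 0)
        return;
    if (link.link_status == RTE_ETH_LINK_UP)
        printf("port %u: %u Mbps %s\n", port_id, link.link_speed,
               link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
               "full-duplex" : "half-duplex");
    else
        printf("port %u: link down\n", port_id);
}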
@@ -4528,7 +4527,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4747,13 +4746,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
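The renamed RTE_ETH_FC_* modes are what rte_eth_dev_flow_ctrl_get() now reports; a short sketch:

#include <string.h>
#include <rte_ethdev.h>

static int
rx_pause_enabled(uint16_t port_id)
{
    struct rte_eth_fc_conf fc;

    memset(&fc, 0, sizeof(fc));
    if (rte_eth_dev_flow_ctrl_get(port_id, &fc) != 0)
        return 0;
    return fc.mode == RTE_ETH_FC_FULL || fc.mode == RTE_ETH_FC_RX_PAUSE;
}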
@@ -5051,8 +5050,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
@@ -5099,8 +5098,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) &
IXGBE_4_BIT_MASK);
if (!mask)
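The idx/shift arithmetic in these two hunks mirrors what an application does when filling the indirection table; a hedged round-robin sketch using the renamed RTE_ETH_RETA_GROUP_SIZE (nb_queues must be nonzero; reta_size comes from dev_info.reta_size):

#include <string.h>
#include <rte_ethdev.h>

static int
reta_round_robin(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
    /* enough entries for the largest table ixgbe reports (512) */
    struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
                                              RTE_ETH_RETA_GROUP_SIZE];
    uint16_t i;

    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < reta_size; i++) {
        uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
        uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

        reta_conf[idx].mask |= 1ULL << shift;
        reta_conf[idx].reta[shift] = i % nb_queues;
    }
    return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}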
@@ -5199,11 +5198,11 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (frame_size > IXGBE_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -5271,22 +5270,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -5346,8 +5345,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5581,10 +5580,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
ixgbevf_vlan_strip_queue_set(dev, i, on);
}
}
@@ -5715,12 +5714,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
}
@@ -5734,15 +5733,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= IXGBE_VMOLR_AUPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= IXGBE_VMOLR_ROMPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= IXGBE_VMOLR_ROPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= IXGBE_VMOLR_BAM;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= IXGBE_VMOLR_MPE;
return new_val;
@@ -5753,8 +5752,8 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
#define IXGBE_MRCTL_DPME 0x04 /* Downlink Port Mirroring. */
#define IXGBE_MRCTL_VLME 0x08 /* VLAN Mirroring. */
#define IXGBE_INVALID_MIRROR_TYPE(mirror_type) \
- ((mirror_type) & ~(uint8_t)(ETH_MIRROR_VIRTUAL_POOL_UP | \
- ETH_MIRROR_UPLINK_PORT | ETH_MIRROR_DOWNLINK_PORT | ETH_MIRROR_VLAN))
+ ((mirror_type) & ~(uint8_t)(RTE_ETH_MIRROR_VIRTUAL_POOL_UP | \
+ RTE_ETH_MIRROR_UPLINK_PORT | RTE_ETH_MIRROR_DOWNLINK_PORT | RTE_ETH_MIRROR_VLAN))
static int
ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
@@ -5794,7 +5793,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
mirror_type |= IXGBE_MRCTL_VLME;
/* Check if vlan id is valid and find corresponding VLAN ID
* index in VLVF
@@ -5827,7 +5826,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
mr_info->mr_conf[rule_id].vlan.vlan_mask =
mirror_conf->vlan.vlan_mask;
- for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
if (mirror_conf->vlan.vlan_mask & (1ULL << i))
mr_info->mr_conf[rule_id].vlan.vlan_id[i] =
mirror_conf->vlan.vlan_id[i];
@@ -5836,7 +5835,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
mv_lsb = 0;
mv_msb = 0;
mr_info->mr_conf[rule_id].vlan.vlan_mask = 0;
- for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++)
+ for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++)
mr_info->mr_conf[rule_id].vlan.vlan_id[i] = 0;
}
}
@@ -5845,7 +5844,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
* if pool mirroring is enabled, write the related pool mask register;
* if disabled, clear the PFMRVM register
*/
- if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
mirror_type |= IXGBE_MRCTL_VPME;
if (on) {
mp_lsb = mirror_conf->pool_mask & 0xFFFFFFFF;
@@ -5859,9 +5858,9 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
mr_info->mr_conf[rule_id].pool_mask = 0;
}
}
- if (mirror_conf->rule_type & ETH_MIRROR_UPLINK_PORT)
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_UPLINK_PORT)
mirror_type |= IXGBE_MRCTL_UPME;
- if (mirror_conf->rule_type & ETH_MIRROR_DOWNLINK_PORT)
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_DOWNLINK_PORT)
mirror_type |= IXGBE_MRCTL_DPME;
/* read mirror control register and recalculate it */
@@ -5882,13 +5881,13 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
/* write pool mirror control register */
- if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), mp_lsb);
IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset),
mp_msb);
}
/* write VLAN mirror control register */
- if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), mv_lsb);
IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset),
mv_msb);
@@ -6266,8 +6265,8 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) &&
+ rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
IXGBE_MMW_SIZE_JUMBO_FRAME);
else
@@ -6942,15 +6941,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = IXGBE_INCVAL_100;
shift = IXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = IXGBE_INCVAL_1GB;
shift = IXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = IXGBE_INCVAL_10GB;
shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7361,16 +7360,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- return ETH_RSS_RETA_SIZE_512;
+ return RTE_ETH_RSS_RETA_SIZE_512;
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
- return ETH_RSS_RETA_SIZE_64;
+ return RTE_ETH_RSS_RETA_SIZE_64;
case ixgbe_mac_X540_vf:
case ixgbe_mac_82599_vf:
return 0;
default:
- return ETH_RSS_RETA_SIZE_128;
+ return RTE_ETH_RSS_RETA_SIZE_128;
}
}
@@ -7380,10 +7379,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- if (reta_idx < ETH_RSS_RETA_SIZE_128)
+ if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
return IXGBE_RETA(reta_idx >> 2);
else
- return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+ return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
@@ -7439,7 +7438,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -7450,7 +7449,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled*/
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -7474,9 +7473,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled*/
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7489,7 +7488,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7742,7 +7741,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -7774,7 +7773,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -7871,12 +7870,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
@@ -7908,11 +7907,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
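With the renamed tunnel types, registering the VXLAN UDP port from an application looks like this minimal sketch (4789 is the IANA-assigned VXLAN port; error handling trimmed):

#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id)
{
    struct rte_eth_udp_tunnel tunnel = {
        .udp_port = 4789,
        .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
    };

    return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}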
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca246b..3443154589e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -113,15 +113,15 @@
#define IXGBE_FDIR_NVGRE_TUNNEL_TYPE 0x0
#define IXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IXGBE_VF_IRQ_ENABLE_MASK 3 /* vf irq enable mask */
#define IXGBE_VF_MAXMSIVECTOR 1
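The renamed RTE_ETH_RSS_* bits in the macro above are the same ones an application sets in rss_hf at configure time; a small sketch (hash key left to the PMD default, hash selection illustrative):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf conf;

    memset(&conf, 0, sizeof(conf));
    conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    conf.rx_adv_conf.rss_conf.rss_key = NULL; /* PMD default key */
    conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IPV4 |
                                       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                                       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}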
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
uint32_t key);
static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_input *input, uint8_t queue,
uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
{
*fdirctrl = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_128KB_HASH_MASK;
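The renamed pballoc enum is consumed through the legacy fdir_conf still embedded in rte_eth_conf in this series; a hedged sketch of selecting the smallest packet-buffer allocation (the 64K choice is illustrative and maps to the 8k - 1 signature filters noted in configure_fdir_flags() above):

#include <rte_ethdev.h>

static void
fdir_pballoc_default(struct rte_eth_conf *conf)
{
    conf->fdir_conf.pballoc = RTE_ETH_FDIR_PBALLOC_64K;
}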
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 511b612f7fe4..0557de6c1aa5 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
IXGBE_SECTXCTRL_STORE_FORWARD);
reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..d03238b728ba 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -107,15 +107,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -266,15 +266,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
gpie |= IXGBE_GPIE_VTMODE_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
gpie |= IXGBE_GPIE_VTMODE_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
gpie |= IXGBE_GPIE_VTMODE_16;
break;
@@ -604,11 +604,11 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
if (max_frame > IXGBE_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -684,29 +684,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c814a28cb49a..4e712c2b5e61 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2591,26 +2591,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2778,7 +2778,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/*
@@ -3014,7 +3014,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hw->mac.type != ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return offloads;
}
@@ -3025,20 +3025,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
uint64_t offloads;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (ixgbe_is_vf(dev) == 0)
- offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
/*
* RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3048,20 +3048,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X550) &&
!RTE_ETH_DEV_SRIOV(dev).active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -3116,7 +3116,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -3520,23 +3520,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
}
@@ -3618,23 +3618,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
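Reading back which renamed hash bits a port actually enabled; a minimal sketch:

#include <rte_ethdev.h>

static int
ipv4_tcp_rss_enabled(uint16_t port_id)
{
    struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };

    if (rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf) != 0)
        return 0;
    return (rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0;
}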
@@ -3710,12 +3710,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
ixgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* RXPBSIZE
@@ -3740,7 +3740,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3749,7 +3749,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
}
/* MRQC: enable vmdq and dcb */
- mrqc = (num_pools == ETH_16_POOLS) ?
+ mrqc = (num_pools == RTE_ETH_16_POOLS) ?
IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
@@ -3765,7 +3765,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3789,7 +3789,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 16 or 32 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*
* MPSAR - allow pools to read specific mac addresses
@@ -3871,7 +3871,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
if (hw->mac.type != ixgbe_mac_82598EB)
/*PF VF Transmit Enable*/
IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
- vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3887,12 +3887,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3902,7 +3902,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3920,12 +3920,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3935,7 +3935,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3962,7 +3962,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3989,7 +3989,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4158,7 +4158,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
if (hw->mac.type != ixgbe_mac_82598EB) {
config_dcb_rx = DCB_RX_CONFIG;
@@ -4171,8 +4171,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_configure(dev);
}
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4185,7 +4185,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -4196,7 +4196,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4212,15 +4212,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
- if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -4270,9 +4270,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
- }
}
if (config_dcb_tx) {
/* Only support an equally distributed
@@ -4286,7 +4285,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
}
@@ -4322,7 +4321,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/*
@@ -4336,7 +4335,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = ixgbe_dcb_pfc_enabled;
}
ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -4357,12 +4356,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -4418,7 +4417,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 64 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
/*
@@ -4539,11 +4538,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
mrqc &= ~IXGBE_MRQC_MRQE_MASK;
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS64EN;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS32EN;
break;
@@ -4564,17 +4563,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQEN);
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT4TCEN);
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT8TCEN);
break;
@@ -4601,21 +4600,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
ixgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
ixgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable RSS mode. */
ixgbe_rss_disable(dev);
@@ -4626,18 +4625,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
ixgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4671,7 +4670,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
ixgbe_vmdq_tx_hw_configure(hw);
else {
mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4684,13 +4683,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
IXGBE_MTQC_8TC_8TQ;
break;
@@ -4898,7 +4897,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
rxq->rx_using_sse = rx_using_sse;
#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
#endif
}
}
@@ -4926,10 +4925,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4937,8 +4936,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
/*
* According to chapter of 4.6.7.2.1 of the Spec Rev.
* 3.0 RSC configuration requires HW CRC stripping being
@@ -4952,7 +4951,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RFCTL configuration */
rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
- if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~IXGBE_RFCTL_RSC_DIS;
else
rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4961,7 +4960,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set RDRXCTL.RSCACKC bit */
@@ -5082,7 +5081,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
else
hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5090,7 +5089,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
@@ -5119,7 +5118,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5128,7 +5127,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -5171,11 +5170,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -5190,7 +5189,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
rxcsum |= IXGBE_RXCSUM_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= IXGBE_RXCSUM_IPPCSE;
else
rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5200,7 +5199,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540) {
rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
else
rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5406,9 +5405,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY) ||
+ RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY)) {
+ RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = ixgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -5696,7 +5695,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5745,7 +5744,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(rxmode->max_rx_pkt_len +
2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -5754,8 +5753,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 476ef62cfda2..220efffe4d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
uint8_t rx_udp_csum_zero_err;
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -226,7 +226,7 @@ struct ixgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index adba855ca30f..714707941537 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -278,7 +278,7 @@ static inline int
ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
/* no fdir support */
if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index a8407e742e6d..c2ab3131f22e 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a19408..536e33010703 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
/**< Maximum number of MAC addresses. */
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/**< Device RX offload capabilities. */
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**< Device TX offload capabilities. */
dev_info->speed_capa =
representor->pf_ethdev->data->dev_link.link_speed;
- /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
dev_info->switch_info.name =
representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
*/
if (hw->mac.type == ixgbe_mac_82598EB)
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_16_POOLS;
+ RTE_ETH_16_POOLS;
else
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_64_POOLS;
+ RTE_ETH_64_POOLS;
for (q = 0; q < queues_per_pool; q++)
(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
* @param rx_mask
* The RX mode mask, which is one or more of accepting Untagged Packets,
* packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-* ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-* ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+* RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+* RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
* in rx_mode.
* @param on
* 1 - Enable a VF RX mode.
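For reference, a minimal application-side sketch of this API under the renamed
flags; the port/VF ids are hypothetical and error handling is elided:

    #include <rte_ethdev.h>
    #include <rte_pmd_ixgbe.h>

    /* Accept untagged and broadcast traffic on a VF (illustrative ids). */
    static int
    accept_untag_bcast(uint16_t port_id, uint16_t vf_id)
    {
        uint16_t rx_mask = RTE_ETH_VMDQ_ACCEPT_UNTAG |
                           RTE_ETH_VMDQ_ACCEPT_BROADCAST;

        /* on = 1 enables the given accept modes on the VF RX path. */
        return rte_pmd_ixgbe_set_vf_rxmode(port_id, vf_id, rx_mask, 1);
    }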
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 871d11c4133d..29060ca76f93 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static int is_kni_initialized;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..e91f4c13c63b 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
break;
/* CN23xx 25G cards */
case PCI_SUBSYS_DEV_ID_CN2350_225:
case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = ETH_LINK_SPEED_25G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
break;
default:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
lio_dev_err(lio_dev,
"Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->max_mac_addrs = 1;
- devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+ devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
+ devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
devinfo->rx_desc_lim = lio_rx_desc_lim;
devinfo->tx_desc_lim = lio_tx_desc_lim;
devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_EX |
- ETH_RSS_IPV6_TCP_EX);
+ devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_IPV6_TCP_EX);
return 0;
}
@@ -483,10 +483,10 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (frame_len > LIO_ETH_MAX_LEN)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
eth_dev->data->mtu = mtu;
@@ -540,10 +540,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
- for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_RETA_GROUP_SIZE) + j;
+ index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
rss_state->itable[index] = reta_conf[i].reta[j];
}
}
@@ -583,12 +583,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- num = reta_size / RTE_RETA_GROUP_SIZE;
+ num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
- RTE_RETA_GROUP_SIZE);
+ &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+ RTE_ETH_RETA_GROUP_SIZE);
reta_conf++;
}
@@ -616,17 +616,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
if (rss_state->ip)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (rss_state->tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (rss_state->ipv6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (rss_state->ipv6_tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (rss_state->ipv6_ex)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
rss_conf->rss_hf = rss_hf;
@@ -694,42 +694,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_state->hash_disable)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
hashinfo |= LIO_RSS_HASH_IPV4;
rss_state->ip = 1;
} else {
rss_state->ip = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV4;
rss_state->tcp_hash = 1;
} else {
rss_state->tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
hashinfo |= LIO_RSS_HASH_IPV6;
rss_state->ipv6 = 1;
} else {
rss_state->ipv6 = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6;
rss_state->ipv6_tcp_hash = 1;
} else {
rss_state->ipv6_tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
hashinfo |= LIO_RSS_HASH_IPV6_EX;
rss_state->ipv6_ex = 1;
} else {
rss_state->ipv6_ex = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
rss_state->ipv6_tcp_ex_hash = 1;
} else {
@@ -778,7 +778,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -835,7 +835,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
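Since both hunks in this file accept only VXLAN, here is a sketch of the
matching application call, assuming the standard VXLAN port and an already
configured port_id:

    #include <rte_ethdev.h>

    static int
    add_vxlan_port(uint16_t port_id)
    {
        struct rte_eth_udp_tunnel tunnel = {
            .udp_port = 4789,                      /* IANA VXLAN port */
            .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
        };

        /* The PMD callback above rejects any other prot_type. */
        return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
    }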
@@ -933,10 +933,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
/* Initialize */
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
/* Return what we found */
if (lio_dev->linfo.link.s.link_up == 0) {
@@ -944,18 +944,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
return rte_eth_linkstatus_set(eth_dev, &link);
}
- link.link_status = ETH_LINK_UP; /* Interface is up */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (lio_dev->linfo.link.s.speed) {
case LIO_LINK_SPEED_10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case LIO_LINK_SPEED_25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
}
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1107,8 +1107,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_RETA_GROUP_SIZE;
- reta_idx = i % RTE_RETA_GROUP_SIZE;
+ conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
reta_conf[conf_idx].reta[reta_idx] = q_idx;
reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
}
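The conf_idx/reta_idx arithmetic above is the generic 64-entry group
addressing of the RETA; a hedged application-side sketch (the table size and
queue count are assumptions):

    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread a 128-entry RETA round-robin over nb_rxq queues. */
    static int
    fill_reta(uint16_t port_id, uint16_t nb_rxq)
    {
        struct rte_eth_rss_reta_entry64
            reta[RTE_ETH_RSS_RETA_SIZE_128 / RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i;

        memset(reta, 0, sizeof(reta));
        for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
            uint16_t conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
            uint16_t reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;

            reta[conf_idx].mask |= UINT64_C(1) << reta_idx;
            reta[conf_idx].reta[reta_idx] = i % nb_rxq;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta,
                                           RTE_ETH_RSS_RETA_SIZE_128);
    }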
@@ -1124,10 +1124,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
struct rte_eth_rss_conf rss_conf;
switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
lio_dev_rss_configure(eth_dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode. */
default:
memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1509,7 +1509,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -1530,11 +1530,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
lio_dev_err(lio_dev, "Unable to set Link Down\n");
return -1;
}
@@ -1746,9 +1746,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Inform firmware about change in number of queues to use.
* Disable IO queues and reset registers for re-configuration.
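The hunk above follows a pattern repeated across the PMDs in this series:
requesting RSS through mq_mode implies the RSS_HASH offload. From the
application side, under the new names (queue counts are hypothetical):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
        conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP;
        /* PMDs such as the one above then set RTE_ETH_RX_OFFLOAD_RSS_HASH. */
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }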
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb77..a117a05228fc 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
int i;
int ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e3e..ea66f5bfd452 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
#define MEMIF_MP_SEND_REGION "memif_mp_send_region"
@@ -1216,7 +1216,7 @@ memif_connect(struct rte_eth_dev *dev)
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
}
MIF_LOG(INFO, "Connected.");
return 0;
@@ -1367,10 +1367,10 @@ memif_link_update(struct rte_eth_dev *dev,
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
proc_private = dev->process_private;
- if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
proc_private->regions_num == 0) {
memif_mp_request_regions(dev);
- } else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+ } else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
proc_private->regions_num > 0) {
memif_free_regions(dev);
}
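For completeness, the application-visible counterpart of these link fields
under the renamed constants (a sketch only):

    #include <stdio.h>
    #include <string.h>
    #include <rte_ethdev.h>

    static void
    print_link(uint16_t port_id)
    {
        struct rte_eth_link link;

        memset(&link, 0, sizeof(link));
        if (rte_eth_link_get_nowait(port_id, &link) != 0)
            return;
        if (link.link_status == RTE_ETH_LINK_UP)
            printf("port %u: %u Mbps, %s-duplex\n", port_id,
                   link.link_speed,
                   link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
                   "full" : "half");
    }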
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->if_index = priv->if_index;
info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_56G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_56G;
info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
dev_link.link_speed = link_speed;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
dev->data->dev_link = dev_link;
return 0;
}
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
ret = 0;
out:
MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
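The RTE_ETH_FC_* mapping done here by the PMD has a straightforward
application-side counterpart (a hedged sketch, error handling minimal):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Request pause frames in both directions (illustrative). */
    static int
    enable_full_fc(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        memset(&fc_conf, 0, sizeof(fc_conf));
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret != 0)
            return ret;
        fc_conf.mode = RTE_ETH_FC_FULL;    /* was RTE_FC_FULL */
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }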
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
};
static const uint64_t dpdk[] = {
[INNER] = 0,
- [IPV4] = ETH_RSS_IPV4,
- [IPV4_1] = ETH_RSS_FRAG_IPV4,
- [IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
- [IPV6] = ETH_RSS_IPV6,
- [IPV6_1] = ETH_RSS_FRAG_IPV6,
- [IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
- [IPV6_3] = ETH_RSS_IPV6_EX,
+ [IPV4] = RTE_ETH_RSS_IPV4,
+ [IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+ [IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IPV6] = RTE_ETH_RSS_IPV6,
+ [IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+ [IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IPV6_3] = RTE_ETH_RSS_IPV6_EX,
[TCP] = 0,
[UDP] = 0,
- [IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
- [IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
- [IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
- [IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
- [IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
- [IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+ [IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ [IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ [IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+ [IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+ [IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ [IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
};
static const uint64_t verbs[RTE_DIM(dpdk)] = {
[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
* - MAC flow rules are generated from @p dev->data->mac_addrs
* (@p priv->mac array).
* - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
* is enabled and VLAN filters are configured.
*
* @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
struct rte_ether_addr *rule_mac = &eth_spec.dst;
rte_be16_t *rule_vlan =
(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!ETH_DEV(priv)->data->promiscuous ?
&vlan_spec.tci :
NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
static void
mlx4_link_status_alarm(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
};
uint32_t caught[RTE_DIM(type)] = { 0 };
struct ibv_async_event event;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
unsigned int i;
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
int
mlx4_intr_install(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
int rc;
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
int
mlx4_rxq_intr_enable(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..9977c761880a 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,13 +682,13 @@ mlx4_rxq_detach(struct rxq *rxq)
uint64_t
mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
- offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return offloads;
}
@@ -704,7 +704,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
uint64_t
mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
(void)priv;
return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
/* By default, FCS (CRC) is stripped by hardware. */
crc_present = 0;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (priv->hw_fcs_strip) {
crc_present = 1;
} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts = elts,
/* Toggle Rx checksum offload if hardware supports it. */
.csum = priv->hw_csum &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.csum_l2tun = priv->hw_csum_l2tun &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.crc_present = crc_present,
.l2tun_offload = priv->hw_csum_l2tun,
.stats = {
@@ -831,7 +831,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
(mb_len - RTE_PKTMBUF_HEADROOM)) {
;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t size =
RTE_PKTMBUF_HEADROOM +
dev->data->dev_conf.rxmode.max_rx_pkt_len;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 2df26842fbe4..19feec5e5202 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
uint64_t
mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+ uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (priv->hw_csum) {
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
}
if (priv->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (priv->hw_csum_l2tun) {
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (priv->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
}
return offloads;
}
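Applications are expected to gate offload requests on the bits advertised
here; a sketch using the renamed TSO flag (helper name is hypothetical):

    #include <rte_ethdev.h>

    static void
    maybe_enable_tso(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return;
        if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO)
            conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
    }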
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts_comp_cd_init =
RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
.csum = priv->hw_csum &&
- (offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM)),
+ (offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
.csum_l2tun = priv->hw_csum_l2tun &&
(offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
/* Enable Tx loopback for VF devices. */
.lb = !!priv->vf,
.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
dev_link.link_speed = link_speed;
priv->link_speed_capa = 0;
if (edata.supported & (SUPPORTED_1000baseT_Full |
SUPPORTED_1000baseKX_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (edata.supported & SUPPORTED_10000baseKR_Full)
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (edata.supported & (SUPPORTED_40000baseKR4_Full |
SUPPORTED_40000baseCR4_Full |
SUPPORTED_40000baseSR4_Full |
SUPPORTED_40000baseLR4_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
return ret;
}
dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
- ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+ RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
sc = ecmd->link_mode_masks[0] |
((uint64_t)ecmd->link_mode_masks[1] << 32);
priv->link_speed_capa = 0;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
sc = ecmd->link_mode_masks[2] |
((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
MLX5_BITSHIFT
(ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa481e..c40cda8fcaf9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1343,8 +1343,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1627,7 +1627,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
/*
* If HW has bug working with tunnel packet decapsulation and
* scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
- * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+ * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
*/
if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe719..ff1c8e17460a 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1463,10 +1463,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_udp_tunnel *udp_tunnel)
{
MLX5_ASSERT(udp_tunnel != NULL);
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
udp_tunnel->udp_port == 4789)
return 0;
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
udp_tunnel->udp_port == 4790)
return 0;
return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e23196..9588dff05180 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1226,7 +1226,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
struct mlx5_flow_rss_desc {
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint64_t hash_fields; /* Verbs Hash fields. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
#define MLX5_VPMD_DESCS_PER_LOOP 4
/* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
/* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
MLX5_RSS_SRC_DST_ONLY))
/* Timeout in seconds to get a valid link status. */
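Note that MLX5_RSS_HF_MASK above is the complement of the supported set, so
a request is rejected when it overlaps the mask; a sketch of that check (the
helper name is hypothetical):

    #include <errno.h>
    #include <stdint.h>

    static int
    validate_rss_hf(uint64_t rss_hf)
    {
        /* Any bit outside RTE_ETH_RSS_IP/UDP/TCP + SRC/DST-only is refused. */
        if (rss_hf & MLX5_RSS_HF_MASK)
            return -ENOTSUP;
        return 0;
    }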
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
}
if ((dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+ RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->default_txportconf.ring_size = 256;
info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
- if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
- (priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+ if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+ (priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
info->default_rxportconf.nb_queues = 16;
info->default_txportconf.nb_queues = 16;
if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 4762fa0f5f88..7048fff3883e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
uint64_t rss_types;
/**<
* RSS types bit-field associated with this node
- * (see ETH_RSS_* definitions).
+ * (see RTE_ETH_RSS_* definitions).
*/
uint64_t node_flags;
/**<
@@ -272,7 +272,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
* @param[in] pattern
* User flow pattern.
* @param[in] types
- * RSS types to expand (see ETH_RSS_* definitions).
+ * RSS types to expand (see RTE_ETH_RSS_* definitions).
* @param[in] graph
* Input graph to expand @p pattern according to @p types.
* @param[in] graph_root_index
@@ -522,8 +522,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -531,11 +531,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_OUTER_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -546,8 +546,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_GRE,
MLX5_EXPANSION_NVGRE),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -555,11 +555,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_VXLAN] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -612,32 +612,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
MLX5_EXPANSION_IPV4_TCP),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_IPV4_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
MLX5_EXPANSION_IPV6_TCP,
MLX5_EXPANSION_IPV6_FRAG_EXT),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_IPV6_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1048,7 +1048,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
* @param[in] tunnel
* 1 when the hash field is for a tunnel item.
* @param[in] layer_types
- * ETH_RSS_* types.
+ * RTE_ETH_RSS_* types.
* @param[in] hash_fields
* Item hash fields.
*
@@ -1601,14 +1601,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
&rss->types,
"some RSS protocols are not"
" supported");
- if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
- !(rss->types & ETH_RSS_IP))
+ if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+ !(rss->types & RTE_ETH_RSS_IP))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L3 partial RSS requested but L3 RSS"
" type not specified");
- if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
- !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+ if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+ !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L4 partial RSS requested but L4 RSS"
@@ -6364,8 +6364,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
* mlx5_flow_hashfields_adjust() in advance.
*/
rss_desc->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
}
flow->dev_handles = 0;
if (rss && rss->types) {
@@ -6989,7 +6989,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
if (!priv->reta_idx_n || !priv->rxqs_n) {
return 0;
}
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
queue[i] = (*priv->reta_idx)[i];
@@ -8657,7 +8657,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
NULL, "invalid port configuration");
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
ctx->action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 76ad53f2a1e8..d5d3a89374fe 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
/* Valid layer type for IPV4 RSS. */
#define MLX5_IPV4_LAYER_TYPES \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
/* IBV hash source bits for IPV4. */
#define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
/* Valid layer type for IPV6 RSS. */
#define MLX5_IPV6_LAYER_TYPES \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
/* IBV hash source bits for IPV6. */
#define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3f6f5dcfbadb..02a337dc2c93 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10934,9 +10934,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
else
dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10944,9 +10944,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
else
dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10960,11 +10960,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
return;
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
- if (rss_types & ETH_RSS_UDP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_UDP;
else
@@ -10972,11 +10972,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
}
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
- if (rss_types & ETH_RSS_TCP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_TCP;
else
@@ -14495,9 +14495,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4:
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV4;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV4;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV4;
else
*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14506,9 +14506,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV6:
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV6;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV6;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV6;
else
*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14517,11 +14517,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_UDP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_UDP:
- if (rss_types & ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_UDP) {
*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
else
*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14530,11 +14530,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_TCP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_TCP:
- if (rss_types & ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_TCP) {
*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
else
*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14682,8 +14682,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
origin = &shared_rss->origin;
origin->func = rss->func;
origin->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
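For readers following the *_ONLY branches above: they implement the application-side
request to hash on only one side of the L3/L4 tuple. A minimal sketch of that request,
assuming <rte_ethdev.h> and an initialized port; variable names are illustrative:

	/* Restrict RSS hashing to the IPv4/IPv6 source address only. */
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* NULL keeps the driver's default key */
		.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_L3_SRC_ONLY,
	};
	int ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);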
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index b93fd4d2c962..ef286a13729c 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1834,7 +1834,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_TCP,
+ (rss_desc, tunnel, RTE_ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
IBV_RX_HASH_DST_PORT_TCP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1847,7 +1847,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_UDP,
+ (rss_desc, tunnel, RTE_ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
IBV_RX_HASH_DST_PORT_UDP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
if (!(*priv->rxqs)[i])
continue;
(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
- !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+ !!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
++idx;
}
return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
}
/* Fill each entry of the table even if its bit is not set. */
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
(*priv->reta_idx)[i];
}
return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
if (ret)
return ret;
for (idx = 0, i = 0; (i != reta_size); ++i) {
- idx = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
if (((reta_conf[idx].mask >> i) & 0x1) == 0)
continue;
MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
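The reta update/query hunks above rely on the 64-entry grouping of the indirection
table. A sketch of the matching application-side fill, assuming <rte_ethdev.h>,
<string.h> and variables port_id/nb_rx_queues; the 256-entry table size is illustrative
(the real size comes from rte_eth_dev_info):

	struct rte_eth_rss_reta_entry64 reta_conf[256 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < 256; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= UINT64_C(1) << pos;   /* mark entry valid */
		reta_conf[idx].reta[pos] = i % nb_rx_queues; /* round-robin spread */
	}
	ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, 256);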
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..0d6c58f47d89 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,23 +333,23 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *config = &priv->config;
- uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH);
+ uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
if (config->hw_fcs_strip)
- offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (config->hw_csum)
- offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
if (config->hw_vlan_strip)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (MLX5_LRO_SUPPORTED(dev))
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -363,7 +363,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
uint64_t
mlx5_get_rx_port_offloads(void)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
return offloads;
}
@@ -695,7 +695,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->dev_conf.rxmode.offloads;
/* The offloads should be checked on rte_eth_dev layer. */
- MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+ MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
DRV_LOG(ERR, "port %u queue index %u split "
"offload not configured",
@@ -1329,7 +1329,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
- unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+ unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
unsigned int max_rx_pkt_len = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1431,7 +1431,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
MLX5_ASSERT(tmpl->rxq.rxseg_n &&
tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
- if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
@@ -1475,7 +1475,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
config->mprq.stride_size_n : mprq_stride_size;
tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
tmpl->rxq.strd_scatter_en =
- !!(offloads & DEV_RX_OFFLOAD_SCATTER);
+ !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
max_lro_size = RTE_MIN(max_rx_pkt_len,
@@ -1490,7 +1490,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
max_lro_size = max_rx_pkt_len;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
if (lro_on_queue && first_mb_free_size <
@@ -1551,9 +1551,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
/* Toggle RX checksum offload if hardware supports it. */
- tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+ tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* Configure Rx timestamp. */
- tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+ tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
tmpl->rxq.timestamp_rx_flag = 0;
if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
&tmpl->rxq.timestamp_offset,
@@ -1562,11 +1562,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
goto error;
}
/* Configure VLAN stripping. */
- tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
/* By default, FCS (CRC) is stripped by hardware. */
tmpl->rxq.crc_present = 0;
tmpl->rxq.lro = lro_on_queue;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (config->hw_fcs_strip) {
/*
* RQs used for LRO-enabled TIRs should not be
@@ -1596,7 +1596,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
tmpl->rxq.crc_present << 2);
/* Save port ID. */
tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
- (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+ (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
tmpl->rxq.port_id = dev->data->port_id;
tmpl->priv = priv;
tmpl->rxq.mp = rx_seg[0].mp;
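Note in the mlx5_rxq_new() hunks above that the effective per-queue offload set is the
union of the queue-level conf->offloads and the port-level rxmode.offloads. On the
application side the safe pattern is to enable a flag only when the capability is
advertised; a rough sketch, assuming <rte_ethdev.h>:

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = {0};

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret == 0 &&
	    (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER))
		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;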
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
/* HW checksum offload capabilities of vectorized Tx. */
#define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
/*
* Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
unsigned int diff = 0, olx = 0, i, m;
MLX5_ASSERT(priv);
- if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
/* We should support Multi-Segment Packets. */
olx |= MLX5_TXOFF_CONFIG_MULTI;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
/* We should support TCP Send Offload. */
olx |= MLX5_TXOFF_CONFIG_TSO;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support Software Parser for Tunnels. */
olx |= MLX5_TXOFF_CONFIG_SWP;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support IP/TCP/UDP Checksums. */
olx |= MLX5_TXOFF_CONFIG_CSUM;
}
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
/* We should support VLAN insertion. */
olx |= MLX5_TXOFF_CONFIG_VLAN;
}
- if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
rte_mbuf_dynflag_lookup
(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index eb4d34ca559e..06cdeba662bc 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,35 +98,35 @@ uint64_t
mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
struct mlx5_dev_config *config = &priv->config;
if (config->hw_csum)
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if (config->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (config->tx_pp)
- offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+ offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
if (config->swp) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso)
- offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
}
if (config->tunnel_en) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
}
if (!config->mprq.enabled)
- offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
return offloads;
}
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
unsigned int inlen_mode; /* Minimal required Inline data. */
unsigned int txqs_inline; /* Min Tx queues to enable inline. */
uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
- bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
bool vlan_inline;
unsigned int temp;
txq_ctrl->txq.fast_free =
- !!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
- !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+ !!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
!config->mprq.enabled);
if (config->txqs_inline == MLX5_ARG_UNSET)
txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
* tx_burst routine.
*/
txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
- vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+ vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
!config->hw_vlan_insert;
/*
* If there are few Tx queues it is prioritized
@@ -979,9 +979,9 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
txq_ctrl->txq.tso_en = 1;
}
txq_ctrl->txq.tunnel_en = config->tunnel_en | config->swp;
- txq_ctrl->txq.swp_en = ((DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
+ txq_ctrl->txq.swp_en = ((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
txq_ctrl->txq.offloads) && config->swp;
}
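The fast_free computation above encodes a real constraint:
RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE presumes all transmitted mbufs are direct,
non-segmented and drawn from a single mempool, so it cannot be combined with
multi-segment transmission. An application-side sketch of the same gating, assuming
<rte_ethdev.h> and a populated dev_info/port_conf as in the previous sketch:

	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
	    !(port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
		port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;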
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (!priv->config.hw_vlan_strip) {
DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c75147..578816fe0513 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -464,8 +464,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
DRV_LOG(DEBUG, "VLAN stripping is %ssupported",
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..37803fe34538 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
struct mvneta_priv *priv = dev->data->dev_private;
struct neta_ppio_params *ppio_params;
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
if (dev->data->nb_rx_queues > 1)
@@ -126,11 +126,11 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
MRVL_NETA_ETH_HDRS_LEN;
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ppio_params = &priv->ppio_params;
@@ -155,10 +155,10 @@ static int
mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *info)
{
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G;
info->max_rx_queues = MRVL_NETA_RXQ_MAX;
info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -510,28 +510,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
neta_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
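The link_update conversion above follows the common PMD pattern: translate device
speed/duplex/autoneg state into a struct rte_eth_link and hand it to the ethdev layer.
A minimal sketch of that pattern, assuming the PMD-internal ethdev driver header and a
link known to be up at 1G:

	struct rte_eth_link link = {
		.link_speed   = RTE_ETH_SPEED_NUM_1G,
		.link_duplex  = RTE_ETH_LINK_FULL_DUPLEX,
		.link_autoneg = RTE_ETH_LINK_AUTONEG,
		.link_status  = RTE_ETH_LINK_UP,
	};
	return rte_eth_linkstatus_set(dev, &link);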
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..ccd47e8f4927 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,15 +54,15 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..d28125ce9635 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -735,7 +735,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..539e196b807e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,16 +58,16 @@
#define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
/** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
@@ -443,14 +443,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
if (rss_conf->rss_hf == 0) {
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
- } else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_2_TUPLE;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 1;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 0;
@@ -484,8 +484,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
- dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+ dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
return -EINVAL;
@@ -496,7 +496,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
MRVL_PP2_ETH_HDRS_LEN;
if (dev->data->mtu > priv->max_mtu) {
@@ -508,7 +508,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -530,7 +530,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return ret;
if (dev->data->nb_rx_queues == 1 &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
priv->configured = 1;
@@ -632,7 +632,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -653,7 +653,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -673,14 +673,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
ret = pp2_ppio_disable(priv->ppio);
if (ret)
return ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -902,7 +902,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
if (dev->data->all_multicast == 1)
mrvl_allmulticast_enable(dev);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = mrvl_populate_vlan_table(dev, 1);
if (ret) {
MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -938,11 +938,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
priv->flow_ctrl = 0;
}
- if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
ret = mrvl_dev_set_link_up(dev);
if (ret) {
MRVL_LOG(ERR, "Failed to set link up");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
}
@@ -1211,30 +1211,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case SPEED_10000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
pp2_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
@@ -1718,11 +1718,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
{
struct mrvl_priv *priv = dev->data->dev_private;
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
info->max_rx_queues = MRVL_PP2_RXQ_MAX;
info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1742,9 +1742,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
info->tx_offload_capa = MRVL_TX_OFFLOADS;
info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
- info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP;
+ info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP;
/* By default packets are dropped if no descriptors are available */
info->default_rxconf.rx_drop_en = 1;
@@ -1873,13 +1873,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
int ret;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
MRVL_LOG(ERR, "VLAN stripping is not supported\n");
return -ENOTSUP;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = mrvl_populate_vlan_table(dev, 1);
else
ret = mrvl_populate_vlan_table(dev, 0);
@@ -1888,7 +1888,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return ret;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
MRVL_LOG(ERR, "Extend VLAN not supported\n");
return -ENOTSUP;
}
@@ -2033,7 +2033,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
- rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+ rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2189,7 +2189,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
return ret;
}
- fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+ fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
if (ret) {
@@ -2198,10 +2198,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
if (en) {
- if (fc_conf->mode == RTE_FC_NONE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
}
return 0;
@@ -2247,19 +2247,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
rx_en = 1;
tx_en = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
rx_en = 0;
tx_en = 1;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
rx_en = 1;
tx_en = 0;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
rx_en = 0;
tx_en = 0;
break;
@@ -2336,11 +2336,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hash_type == PP2_PPIO_HASH_T_NONE)
rss_conf->rss_hf = 0;
else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
- rss_conf->rss_hf = ETH_RSS_IPV4;
+ rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
return 0;
}
@@ -3159,7 +3159,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
eth_dev->dev_ops = &mrvl_ops;
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_eth_dev_probing_finish(eth_dev);
return 0;
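The RTE_ETH_FC_* hunks above compose the reported mode from independent rx/tx pause
state. The same enum drives configuration from the application side; an illustrative
sketch, assuming <rte_ethdev.h> and <string.h>:

	struct rte_eth_fc_conf fc_conf;

	memset(&fc_conf, 0, sizeof(fc_conf));
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret == 0 && fc_conf.mode != RTE_ETH_FC_FULL) {
		fc_conf.mode = RTE_ETH_FC_FULL;	/* pause frames both ways */
		ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}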
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
#include "hn_nvs.h"
#include "ndis.h"
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NETVSC_ARG_LATENCY "latency"
#define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
hn_rndis_get_linkspeed(hv);
link = (struct rte_eth_link) {
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_autoneg = ETH_LINK_SPEED_FIXED,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
.link_speed = hv->link_speed / 10000,
};
if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
if (old.link_status == link.link_status)
return 0;
PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
- (link.link_status == ETH_LINK_UP) ? "up" : "down");
+ (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
return rte_eth_linkstatus_set(dev, &link);
}
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
struct hn_data *hv = dev->data->dev_private;
int rc;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = HN_MAX_XFER_LEN;
dev_info->max_mac_addrs = 1;
dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
dev_info->flow_type_rss_offloads = hv->rss_offloads;
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->max_rx_queues = hv->max_queues;
dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
}
for (i = 0; i < NDIS_HASH_INDCNT; i++) {
- uint16_t idx = i / RTE_RETA_GROUP_SIZE;
- uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+ uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
uint64_t mask = (uint64_t)1 << shift;
if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
/* Convert from DPDK RSS hash flags to NDIS hash flags */
hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
hv->rss_hash |= NDIS_HASH_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
hv->rss_hash |= NDIS_HASH_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
hv->rss_hash |= NDIS_HASH_IPV6_EX;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
if (hv->rss_hash & NDIS_HASH_IPV4)
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (hv->rss_hash & NDIS_HASH_IPV6)
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
if (hv->rss_hash & NDIS_HASH_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
return 0;
}
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = hn_rndis_conf_offload(hv, txmode->offloads,
rxmode->offloads);
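hn_dev_configure() above also shows the convention that RTE_ETH_MQ_RX_RSS_FLAG implies
RTE_ETH_RX_OFFLOAD_RSS_HASH, i.e. the computed hash is delivered per packet. A
consumer-side sketch, assuming <rte_mbuf.h>; pick_worker() is an illustrative helper:

	static inline uint32_t
	pick_worker(const struct rte_mbuf *m, uint32_t nb_workers)
	{
		/* hash.rss is valid only when the PMD set PKT_RX_RSS_HASH */
		if (m->ol_flags & PKT_RX_RSS_HASH)
			return m->hash.rss % nb_workers;
		return 0;	/* no hash available: fall back to worker 0 */
	}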
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index e3f7e636d731..cacb30385404 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
hv->rss_offloads = 0;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
- hv->rss_offloads |= ETH_RSS_IPV4
- | ETH_RSS_NONFRAG_IPV4_TCP
- | ETH_RSS_NONFRAG_IPV4_UDP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV4
+ | RTE_ETH_RSS_NONFRAG_IPV4_TCP
+ | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
- hv->rss_offloads |= ETH_RSS_IPV6
- | ETH_RSS_NONFRAG_IPV6_TCP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6
+ | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
- hv->rss_offloads |= ETH_RSS_IPV6_EX
- | ETH_RSS_IPV6_TCP_EX;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+ | RTE_ETH_RSS_IPV6_TCP_EX;
/* Commit! */
*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
== NDIS_RXCSUM_CAP_TCP4)
params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
== NDIS_TXCSUM_CAP_IP4)
params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
else
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
else
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
return error;
}
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
== HN_NDIS_TXCSUM_CAP_IP4)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
== HN_NDIS_TXCSUM_CAP_TCP4 &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
== HN_NDIS_TXCSUM_CAP_TCP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
(hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
== HN_NDIS_LSOV2_CAP_IP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
return 0;
}
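One hunk above faithfully renames a suspicious check: "rx_offloads &
RTE_ETH_TX_OFFLOAD_UDP_CKSUM" tests an Rx bitmask against a Tx flag, carrying over what
looks like a pre-existing mix-up in the old code (the intent is presumably
RTE_ETH_RX_OFFLOAD_UDP_CKSUM). The Rx and Tx offload flag spaces are distinct bitmasks
and must never be mixed; a sketch of the presumed-intended check, assuming
<rte_ethdev.h>:

	static inline int
	wants_udp_rx_csum(uint64_t rx_offloads)
	{
		/* Rx bitmasks must be tested with RTE_ETH_RX_OFFLOAD_* only */
		return (rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0;
	}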
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 7e91d5984740..c2ff1c999869 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = dev->data->nb_rx_queues;
dev_info->max_tx_queues = dev->data->nb_tx_queues;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
status.speed = MAC_SPEED_UNKNOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_SPEED_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
if (internals->rxmac[0] != NULL) {
nc_rxmac_read_status(internals->rxmac[0], &status);
switch (status.speed) {
case MAC_SPEED_10G:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case MAC_SPEED_40G:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case MAC_SPEED_100G:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
nc_rxmac_read_status(internals->rxmac[i], &status);
if (status.enabled && status.link_up) {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
break;
}
}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index d6d4ba9663c6..f19e9834848b 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
}
/* Timestamps are enabled when the key-value pair
* enable_timestamp=1 is present.
- * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+ * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
*/
if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..dff7cfd3d6f9 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Checking TX mode */
if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
}
/* Checking RX mode */
- if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
!(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
PMD_INIT_LOG(INFO, "RSS not supported");
return -EINVAL;
@@ -359,20 +359,20 @@ nfp_check_offloads(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
hw->mtu = rxmode->max_rx_pkt_len;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
/* L2 broadcast */
@@ -384,13 +384,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_L2MC;
/* TX checksum offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
/* LSO offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hw->cap & NFP_NET_CFG_CTRL_LSO)
ctrl |= NFP_NET_CFG_CTRL_LSO;
else
@@ -398,7 +398,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
/* RX gather */
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
ctrl |= NFP_NET_CFG_CTRL_GATHER;
return ctrl;
@@ -486,14 +486,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
int ret;
static const uint32_t ls_to_ethtool[] = {
- [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_1G] = ETH_SPEED_NUM_1G,
- [NFP_NET_CFG_STS_LINK_RATE_10G] = ETH_SPEED_NUM_10G,
- [NFP_NET_CFG_STS_LINK_RATE_25G] = ETH_SPEED_NUM_25G,
- [NFP_NET_CFG_STS_LINK_RATE_40G] = ETH_SPEED_NUM_40G,
- [NFP_NET_CFG_STS_LINK_RATE_50G] = ETH_SPEED_NUM_50G,
- [NFP_NET_CFG_STS_LINK_RATE_100G] = ETH_SPEED_NUM_100G,
+ [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_1G] = RTE_ETH_SPEED_NUM_1G,
+ [NFP_NET_CFG_STS_LINK_RATE_10G] = RTE_ETH_SPEED_NUM_10G,
+ [NFP_NET_CFG_STS_LINK_RATE_25G] = RTE_ETH_SPEED_NUM_25G,
+ [NFP_NET_CFG_STS_LINK_RATE_40G] = RTE_ETH_SPEED_NUM_40G,
+ [NFP_NET_CFG_STS_LINK_RATE_50G] = RTE_ETH_SPEED_NUM_50G,
+ [NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
PMD_DRV_LOG(DEBUG, "Link update");
@@ -505,15 +505,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
memset(&link, 0, sizeof(struct rte_eth_link));
if (nn_link_status & NFP_NET_CFG_STS_LINK)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
NFP_NET_CFG_STS_LINK_RATE_MASK;
if (nn_link_status >= RTE_DIM(ls_to_ethtool))
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
link.link_speed = ls_to_ethtool[nn_link_status];
@@ -702,26 +702,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = 1;
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -758,25 +758,25 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
};
/* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_UDP;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP;
dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
}
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -847,7 +847,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
if (link.link_status)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -964,9 +964,9 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if ((uint32_t)mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* update max frame size */
dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
@@ -990,12 +990,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
new_ctrl = 0;
/* Enable vlan strip if it is not configured yet */
- if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
!(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
/* Disable vlan strip just if it is configured */
- if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
@@ -1035,8 +1035,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1116,8 +1116,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
*/
for (i = 0; i < reta_size; i += 4) {
/* Handling 4 RSS entries per loop */
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
if (!mask)
@@ -1155,22 +1155,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
rss_hf = rss_conf->rss_hf;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1240,22 +1240,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
/* Propagate current RSS hash functions to caller */
rss_conf->rss_hf = rss_hf;
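The nfp_net_rss_hash_write()/conf_get() pair above translates RTE_ETH_RSS_* bits to and
from device control bits with two symmetric if-chains; a small table is one way to keep
the two directions consistent. A sketch with purely illustrative hardware bit values,
assuming <rte_ethdev.h> (RTE_DIM comes from rte_common.h):

	struct rss_map { uint64_t rss_hf; uint32_t hw_bit; };
	static const struct rss_map rss_maps[] = {
		{ RTE_ETH_RSS_IPV4,             0x01 }, /* hw bits illustrative */
		{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, 0x02 },
		{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, 0x04 },
	};
	uint32_t cfg = 0;
	unsigned int i;

	for (i = 0; i < RTE_DIM(rss_maps); i++)
		if (rss_conf->rss_hf & rss_maps[i].rss_hf)
			cfg |= rss_maps[i].hw_bit;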
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 534a38c14f94..7a6a963bf6cc 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -140,7 +140,7 @@ nfp_net_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index b697b55865cc..ac960328c7de 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
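Both nfp start paths above fall back to a driver-default RSS configuration when the
application requested RTE_ETH_MQ_RX_RSS without details. The explicit application-side
request would look roughly like this, assuming <rte_ethdev.h> and illustrative queue
counts:

	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL,	/* keep the driver default key */
			.rss_hf = RTE_ETH_RSS_IP,
		},
	};
	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);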
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = link_up;
link_speeds = &dev->data->dev_conf.link_speeds;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
negotiate = true;
err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
allowed_speeds = 0;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
- allowed_speeds |= ETH_LINK_SPEED_1G;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_100M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_10M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
if (*link_speeds & ~allowed_speeds) {
PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = hw->mac.default_speeds;
} else {
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= NGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= NGBE_LINK_SPEED_100M_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= NGBE_LINK_SPEED_10M_FULL;
}
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_10M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ~ETH_LINK_SPEED_AUTONEG);
+ ~RTE_ETH_LINK_SPEED_AUTONEG);
hw->mac.get_link_status = true;
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case NGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
case NGBE_LINK_SPEED_10M_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
lan_speed = 0;
break;
case NGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
lan_speed = 1;
break;
case NGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
lan_speed = 2;
break;
}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
- if (link.link_status == ETH_LINK_UP) {
+ if (link.link_status == RTE_ETH_LINK_UP) {
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
ngbe_dev_link_update(dev, 0);
/* likely to up */
- if (link.link_status != ETH_LINK_UP)
+ if (link.link_status != RTE_ETH_LINK_UP)
/* handle it 1 sec later, wait it being stable */
timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 508bafc12a14..df4ddb3b40e2 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
rte_spinlock_t rss_lock;
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
- RTE_RETA_GROUP_SIZE];
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+ RTE_ETH_RETA_GROUP_SIZE];
uint8_t rss_key[40]; /**< 40-byte hash key. */
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
if (dev == NULL)
return -EINVAL;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
if (dev == NULL)
return 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -381,9 +381,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
internal->reta_conf[i].mask = reta_conf[i].mask;
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
}
@@ -406,8 +406,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
rte_spinlock_lock(&internal->rss_lock);
/* Copy RETA table */
- for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
}
@@ -538,8 +538,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
internals->port_id = eth_dev->data->port_id;
rte_eth_random_addr(internals->eth_addr.addr_bytes);
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
- internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
+ internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
rte_memcpy(internals->rss_key, default_rss_key, 40);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..947dabdca2c5 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
(eth_dev->data->port_id),
link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
switch (nic->speed) {
case OCTEONTX_LINK_SPEED_SGMII:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case OCTEONTX_LINK_SPEED_XAUI:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_RXAUI:
case OCTEONTX_LINK_SPEED_10G_R:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_QSGMII:
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case OCTEONTX_LINK_SPEED_40G_R:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case OCTEONTX_LINK_SPEED_RESERVE1:
case OCTEONTX_LINK_SPEED_RESERVE2:
default:
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
octeontx_log_err("incorrect link speed %d", nic->speed);
break;
}
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= OCCTX_TX_MULTI_SEG_F;
return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
flags |= OCCTX_RX_MULTI_SEG_F;
eth_dev->data->scattered_rx = 1;
/* If scatter mode is enabled, TX should also be in multi
* seg mode, else memory leak will occur
*/
- nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
- if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+ if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
- txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+ txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
octeontx_log_err("setting link speed/duplex not supported");
return -EINVAL;
}
@@ -534,13 +534,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
octeontx_log_err("Scatter mode is disabled");
return -EINVAL;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -553,9 +553,9 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return rc;
if (frame_size > OCCTX_L2_MAX_LEN)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Update max_rx_pkt_len */
data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -582,7 +582,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
/* Setup scatter mode if needed by jumbo */
if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
- nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
}
@@ -854,10 +854,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
struct octeontx_nic *nic = octeontx_pmd_priv(dev);
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_40G;
/* Min/Max MTU supported */
dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1369,7 +1369,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
nic->ev_ports = 1;
nic->print_flag = -1;
- data->dev_link.link_status = ETH_LINK_DOWN;
+ data->dev_link.link_status = RTE_ETH_LINK_DOWN;
data->dev_started = 0;
data->promiscuous = 0;
data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..7215039507c3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,24 +55,24 @@
#define OCCTX_MAX_MTU (OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
#define OCTEONTX_RX_OFFLOADS ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
static inline struct octeontx_nic *
octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = octeontx_vlan_hw_filter(nic, true);
if (rc)
goto done;
- nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
} else {
rc = octeontx_vlan_hw_filter(nic, false);
if (rc)
goto done;
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
}
}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
TAILQ_INIT(&nic->vlan_info.fltr_tbl);
- rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
if (rc)
octeontx_log_err("Failed to set vlan offload rc=%d", rc);
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
return rc;
if (conf.rx_pause && conf.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (conf.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (conf.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
/* low_water & high_water values are in Bytes */
fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
conf.high_water = fc_conf->high_water;
conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..ebe503438144 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
if (otx2_dev_is_vf(dev) ||
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
/* TSO not supported for earlier chip revisions */
if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
return capa;
}
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
req->npa_func = otx2_npa_pf_func_get();
req->sso_func = otx2_sso_pf_func_get();
req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->rq.sso_ena = 0;
- if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
aq->rq.ipsech_ena = 1;
aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -664,7 +664,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev)) {
rc = otx2_nix_raw_clock_tsc_conv(dev);
if (rc) {
@@ -691,7 +691,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -706,29 +706,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
if (!dev->ptype_disable)
@@ -767,43 +767,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
- conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -913,8 +913,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Setting up the rx[tx]_offload_flags due to change
* in rx[tx]_offloads.
@@ -1857,21 +1857,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
otx2_err("Outer IP and SCTP checksum unsupported");
goto fail_configure;
}
@@ -2244,7 +2244,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled in PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_enable(eth_dev);
else
@@ -2573,8 +2573,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
rc = otx2_eth_sec_ctx_create(eth_dev);
if (rc)
goto free_mac_addrs;
- dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+ dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
/* Initialize rte-flow */
rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..04e43b63c192 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,44 +117,44 @@
#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
#define CQ_TIMER_THRESH_MAX 255
-#define NIX_RSS_L3_L4_SRC_DST (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
- | ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+ | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
- ETH_RSS_TCP | ETH_RSS_SCTP | \
- ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
- ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+ NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_C_VLAN)
#define NIX_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define NIX_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_QINQ_STRIP | \
- DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
- val = ETH_RSS_RETA_SIZE_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
- val = ETH_RSS_RETA_SIZE_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
- val = ETH_RSS_RETA_SIZE_256;
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+ val = RTE_ETH_RSS_RETA_SIZE_64;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+ val = RTE_ETH_RSS_RETA_SIZE_128;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+ val = RTE_ETH_RSS_RETA_SIZE_256;
else
val = NIX_RSS_RETA_SIZE;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..41761085e156 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -29,11 +29,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
return -EINVAL;
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * NIX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -59,9 +59,9 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return rc;
if (frame_size > NIX_L2_MAX_LEN)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Update max_rx_pkt_len */
data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -590,17 +590,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
};
/* Auto negotiation disabled */
- devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
/* 50G and 100G to be supported for board version C0
* and above.
*/
if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cbf2..e1654ef5b284 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -890,8 +890,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
!RTE_IS_POWER_OF_2(sa_width));
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return 0;
if (rte_security_dynfield_register() < 0)
@@ -933,8 +933,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
uint16_t port = eth_dev->data->port_id;
char name[RTE_MEMZONE_NAMESIZE];
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
goto err_exit;
}
- if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rc = flow_update_sec_tt(dev, actions);
if (rc != 0) {
rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
int rc;
if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
goto done;
if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rsp->rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (rsp->tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
done:
return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (otx2_dev_is_Ax(dev) &&
(dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_conf.mode =
- (fc_conf.mode == RTE_FC_FULL ||
- fc_conf.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_conf.mode == RTE_ETH_FC_FULL ||
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
return 0;
memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 63a33142a579..3fe6727f1d2a 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
attr, "No support of RSS in egress");
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
act, "multi-queue mode is disabled");
@@ -1188,7 +1188,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
*FLOW_KEY_ALG index. So, till we update the action with
*flow_key_alg index, set the action to drop.
*/
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
flow->npc_action = NIX_RX_ACTIONOP_DROP;
else
flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
otx2_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
eth_link.link_status = link->link_up;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
static int
lbk_link_update(struct rte_eth_link *link)
{
- link->link_status = ETH_LINK_UP;
- link->link_speed = ETH_SPEED_NUM_100G;
- link->link_autoneg = ETH_LINK_FIXED;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = RTE_ETH_LINK_UP;
+ link->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
link->link_status = rsp->link_info.link_up;
link->link_speed = rsp->link_info.speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (rsp->link_info.full_duplex)
link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
/* 50G and 100G to be supported for board version C0 and above */
if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
link_speed = 100000;
- if (link_speeds & ETH_LINK_SPEED_50G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
link_speed = 50000;
}
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed = 40000;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed = 25000;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed = 20000;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed = 10000;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
link_speed = 5000;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed = 1000;
return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
static inline uint8_t
nix_parse_eth_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
return cgx_change_mode(dev, &cfg);
}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
/* System time should be already on by default */
nix_start_timecounters(eth_dev);
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
return -EINVAL;
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
if ((reta_conf[i].mask >> j) & 0x01)
rss->ind_tbl[idx] = reta_conf[i].reta[j];
idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
}
/* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = rss->ind_tbl[j];
}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
}
#define RSS_IPV4_ENABLE ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE ( \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE ( \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
dev->rss_info.nix_rss = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
otx2_nix_rss_set_key(dev, rss_conf->rss_key,
(uint32_t)rss_conf->rss_key_len);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
int rc;
/* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
return 0;
/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
}
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
/* For PTP enabled, scalar rx function should be chosen as most of the
* PTP apps are implemented to rx burst 1 pkt.
*/
- if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
pick_rx_func(eth_dev, nix_eth_rx_burst);
else
pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
else
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
* Take offset from LA since in case of untagged packet,
* lbptr is zero.
*/
- if (type == ETH_VLAN_TYPE_OUTER) {
+ if (type == RTE_ETH_VLAN_TYPE_OUTER) {
vtag_action.act.vtag0_def = vtag_index;
vtag_action.act.vtag0_lid = NPC_LID_LA;
vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
if (vlan->strip_on ||
(vlan->qinq_on && !vlan->qinq_before_def)) {
if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- ETH_MQ_RX_RSS)
+ RTE_ETH_MQ_RX_RSS)
vlan->def_rx_mcam_ent.action |=
NIX_RX_ACTIONOP_RSS;
else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
rxmode = &eth_dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, true);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, false);
}
if (rc)
goto done;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, true, 0);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, false, 0);
}
if (rc)
goto done;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
if (!dev->vlan_info.qinq_on) {
- offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, true);
if (rc)
goto done;
}
} else {
if (dev->vlan_info.qinq_on) {
- offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, false);
if (rc)
goto done;
}
}
- if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
tpid_cfg->tpid = tpid;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
else
tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
if (rc)
return rc;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
dev->vlan_info.outer_vlan_tpid = tpid;
else
dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
vlan->outer_vlan_idx = 0;
}
- rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
vtag_index, on);
if (rc < 0) {
printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
} else {
/* Reinstall all mcam entries now if filter offload is set */
if (eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
rc = otx2_nix_vlan_offload_set(eth_dev, mask);
if (rc) {
otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..7bfa6098e230 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,15 +33,15 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
otx_epvf = OTX_EP_DEV(eth_dev);
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
devinfo->max_rx_queues = otx_epvf->max_rx_queues;
devinfo->max_tx_queues = otx_epvf->max_tx_queues;
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
- devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ devinfo->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..77593111f141 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)
@@ -954,13 +954,13 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l4_len = hdr_lens.l4_len;
if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
if (droq_pkt->nb_segs > 1 &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a432a..13d18e875444 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -135,10 +135,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -655,7 +655,7 @@ eth_dev_start(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -710,7 +710,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..a74f27bf8158 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
static struct pfe *g_pfe;
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
/* TODO: make pfe_svr a runtime option.
* Driver should be able to get the SVR
@@ -613,9 +613,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
}
link.link_status = lstatus;
- link.link_speed = ETH_LINK_SPEED_1G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_speed = RTE_ETH_LINK_SPEED_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
pfe_eth_atomic_write_link_status(dev, &link);
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t; /* In DWORDS !!! */
struct eth_phy_cfg {
/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
u32 speed;
-#define ETH_SPEED_AUTONEG 0
-#define ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG 0
+#define RTE_ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
u32 pause; /* bitmask */
#define ETH_PAUSE_NONE 0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..81c35358dc57 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
}
use_tx_offload = !!(tx_offloads &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
- DEV_TX_OFFLOAD_TCP_TSO | /* tso */
- DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
if (use_tx_offload) {
DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
(void)qede_vlan_stripping(eth_dev, 1);
else
(void)qede_vlan_stripping(eth_dev, 0);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN filtering kicks in when a VLAN is added */
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
qede_vlan_filter_set(eth_dev, 0, 1);
} else {
if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
* enabled
*/
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
qede_vlan_filter_set(eth_dev, 0, 0);
}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
/* Configure default RETA */
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
- id = i / RTE_RETA_GROUP_SIZE;
- pos = i % RTE_RETA_GROUP_SIZE;
+ id = i / RTE_ETH_RETA_GROUP_SIZE;
+ pos = i % RTE_ETH_RETA_GROUP_SIZE;
q = i % QEDE_RSS_COUNT(eth_dev);
reta_conf[id].reta[pos] = q;
}
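
For reference, each rte_eth_rss_reta_entry64 carries RTE_ETH_RETA_GROUP_SIZE (64) entries plus a validity mask, so the id/pos arithmetic above splits a flat table index into group and slot. A minimal application-side sketch under the assumption reta_size <= 128 (helper name is illustrative):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    fill_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
    {
        struct rte_eth_rss_reta_entry64 reta_conf[2]; /* 2 * 64 = 128 */
        uint16_t i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < reta_size; i++) {
            uint16_t id = i / RTE_ETH_RETA_GROUP_SIZE;  /* group */
            uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE; /* slot */

            reta_conf[id].mask |= 1ULL << pos;    /* mark entry valid */
            reta_conf[id].reta[pos] = i % nb_rxq; /* round-robin queues */
        }

        return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
    }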
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure TPA parameters */
- if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (qede_enable_tpa(eth_dev, true))
return -EINVAL;
/* Enable scatter mode for LRO */
if (!eth_dev->data->scattered_rx)
- rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}
/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
* Also, we would like to retain similar behavior in PF case, so we
* don't do PF/VF specific check here.
*/
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
if (qede_config_rss(eth_dev))
goto err;
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE(edev);
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* We need to have min 1 RX queue.There is no min check in
* rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_NOTICE(edev, false,
"Invalid devargs supplied, requested change will not take effect\n");
- if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
- rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+ if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
DP_ERR(edev, "Unsupported multi-queue mode\n");
return -ENOTSUP;
}
@@ -1313,12 +1313,12 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
}
/* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
eth_dev->data->mtu =
eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1327,8 +1327,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->mtu = eth_dev->data->mtu;
/* Enable VLAN offloads by default */
- ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -1391,35 +1391,35 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
- dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
dev_info->rx_queue_offload_capa = 0;
/* TX offloads are on a per-packet basis, so it is applicable
* to both at port and queue levels.
*/
- dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
dev_info->default_txconf = (struct rte_eth_txconf) {
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
};
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1431,17 +1431,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
- speed_cap |= ETH_LINK_SPEED_1G;
+ speed_cap |= RTE_ETH_LINK_SPEED_1G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
- speed_cap |= ETH_LINK_SPEED_10G;
+ speed_cap |= RTE_ETH_LINK_SPEED_10G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
- speed_cap |= ETH_LINK_SPEED_25G;
+ speed_cap |= RTE_ETH_LINK_SPEED_25G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
- speed_cap |= ETH_LINK_SPEED_40G;
+ speed_cap |= RTE_ETH_LINK_SPEED_40G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
- speed_cap |= ETH_LINK_SPEED_50G;
+ speed_cap |= RTE_ETH_LINK_SPEED_50G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
- speed_cap |= ETH_LINK_SPEED_100G;
+ speed_cap |= RTE_ETH_LINK_SPEED_100G;
dev_info->speed_capa = speed_cap;
return 0;
@@ -1468,10 +1468,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
/* Link Mode */
switch (q_link.duplex) {
case QEDE_DUPLEX_HALF:
- link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case QEDE_DUPLEX_FULL:
- link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case QEDE_DUPLEX_UNKNOWN:
default:
@@ -1480,11 +1480,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
link.link_duplex = link_duplex;
/* Link Status */
- link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
/* AN */
link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
link.link_speed, link.link_duplex,
@@ -2019,12 +2019,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
/* Pause is assumed to be supported (SUPPORTED_Pause) */
- if (fc_conf->mode == RTE_FC_FULL)
+ if (fc_conf->mode == RTE_ETH_FC_FULL)
params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
QED_LINK_PAUSE_RX_ENABLE);
- if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
- if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
params.link_up = true;
@@ -2048,13 +2048,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
QED_LINK_PAUSE_TX_ENABLE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2095,14 +2095,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
{
*rss_caps = 0;
- *rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
}
int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2228,7 +2228,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
uint8_t entry;
int rc = 0;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported by hardware\n",
reta_size);
return -EINVAL;
@@ -2252,8 +2252,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
for_each_hwfn(edev, i) {
for (j = 0; j < reta_size; j++) {
- idx = j / RTE_RETA_GROUP_SIZE;
- shift = j % RTE_RETA_GROUP_SIZE;
+ idx = j / RTE_ETH_RETA_GROUP_SIZE;
+ shift = j % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = reta_conf[idx].reta[shift];
fid = entry * edev->num_hwfns + i;
@@ -2289,15 +2289,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
uint16_t i, idx, shift;
uint8_t entry;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported\n",
reta_size);
return -EINVAL;
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if (reta_conf[idx].mask & (1ULL << shift)) {
entry = qdev->rss_ind_table[i];
reta_conf[idx].reta[shift] = entry;
@@ -2369,9 +2369,9 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
if (frame_size > QEDE_ETH_MAX_LEN)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
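
The RTE_ETH_FC_* values in qede_flow_ctrl_get()/qede_flow_ctrl_set() above map one-to-one onto the generic ethdev flow-control API. A minimal caller sketch (assuming the PMD implements both callbacks):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    enable_full_pause(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        memset(&fc_conf, 0, sizeof(fc_conf));
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret != 0)
            return ret;

        if (fc_conf.mode != RTE_ETH_FC_FULL) {
            fc_conf.mode = RTE_ETH_FC_FULL; /* Rx + Tx pause frames */
            ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
        }
        return ret;
    }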
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..ceb47c17d0d6 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
/* check FDIR modes */
switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
ECORE_TUNN_CLSS_MAC_VLAN, false);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
qdev->vxlan.udp_port = udp_port;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..144dfef269f3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplify rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
- if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
#define QEDE_MAX_ETHER_HDR_LEN (RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
#define QEDE_ETH_MAX_LEN (RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
-#define QEDE_RSS_OFFLOAD_ALL (ETH_RSS_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP |\
- ETH_RSS_NONFRAG_IPV4_UDP |\
- ETH_RSS_IPV6 |\
- ETH_RSS_NONFRAG_IPV6_TCP |\
- ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN |\
- ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL (RTE_ETH_RSS_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |\
+ RTE_ETH_RSS_IPV6 |\
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |\
+ RTE_ETH_RSS_VXLAN |\
+ RTE_ETH_RSS_GENEVE)
#define QEDE_RXTX_MAX(qdev) \
(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
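
QEDE_RSS_OFFLOAD_ALL above is what the PMD advertises as flow_type_rss_offloads; the application requests a subset of the same renamed bits through rte_eth_conf at configure time, e.g. (a sketch; the requested rss_hf must stay within the advertised capabilities):

    static const struct rte_eth_conf port_conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL, /* keep the PMD default key */
                .rss_hf = RTE_ETH_RSS_IPV4 |
                          RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                          RTE_ETH_RSS_NONFRAG_IPV4_UDP,
            },
        },
    };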
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 1faf38a714cf..8d1ef5fb22bc 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -110,21 +110,21 @@ static int
eth_dev_stop(struct rte_eth_dev *dev)
{
dev->data->dev_started = 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_down(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_up(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = 1;
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 274a98e228e4..d93f9d2418b9 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -81,13 +81,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
{
uint32_t phy_caps = 0;
- if (~speeds & ETH_LINK_SPEED_FIXED) {
+ if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
phy_caps |= (1 << EFX_PHY_CAP_AN);
/*
* If no speeds are specified in the mask, any supported
* may be negotiated
*/
- if (speeds == ETH_LINK_SPEED_AUTONEG)
+ if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
phy_caps |=
(1 << EFX_PHY_CAP_1000FDX) |
(1 << EFX_PHY_CAP_10000FDX) |
@@ -96,17 +96,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
(1 << EFX_PHY_CAP_50000FDX) |
(1 << EFX_PHY_CAP_100000FDX);
}
- if (speeds & ETH_LINK_SPEED_1G)
+ if (speeds & RTE_ETH_LINK_SPEED_1G)
phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
- if (speeds & ETH_LINK_SPEED_10G)
+ if (speeds & RTE_ETH_LINK_SPEED_10G)
phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
- if (speeds & ETH_LINK_SPEED_25G)
+ if (speeds & RTE_ETH_LINK_SPEED_25G)
phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
- if (speeds & ETH_LINK_SPEED_40G)
+ if (speeds & RTE_ETH_LINK_SPEED_40G)
phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
- if (speeds & ETH_LINK_SPEED_50G)
+ if (speeds & RTE_ETH_LINK_SPEED_50G)
phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
- if (speeds & ETH_LINK_SPEED_100G)
+ if (speeds & RTE_ETH_LINK_SPEED_100G)
phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
return phy_caps;
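
The mask decoded above follows the ethdev link_speeds convention: RTE_ETH_LINK_SPEED_AUTONEG (0) lets any supported speed be negotiated, while OR-ing RTE_ETH_LINK_SPEED_FIXED disables autonegotiation for the single speed given. For illustration:

    /* negotiate anything the PHY supports */
    struct rte_eth_conf conf_autoneg = {
        .link_speeds = RTE_ETH_LINK_SPEED_AUTONEG,
    };

    /* force 25G, no autonegotiation */
    struct rte_eth_conf conf_fixed_25g = {
        .link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_25G,
    };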
@@ -337,10 +337,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
tx_offloads |= txq_info->offloads;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
else
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -827,7 +827,7 @@ sfc_attach(struct sfc_adapter *sa)
sa->priv.shared->tunnel_encaps =
encp->enc_tunnel_encapsulations_supported;
- if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso)
@@ -836,8 +836,8 @@ sfc_attach(struct sfc_adapter *sa)
if (sa->tso &&
(sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
- (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+ (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d4cb96881cd2..ca8774ad0950 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -916,11 +916,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.get_dev_info = sfc_ef100_rx_get_dev_info,
.qsize_up_rings = sfc_ef100_rx_qsize_up_rings,
.qcreate = sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d3470..7c91ee3fcb53 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -942,16 +942,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef100_get_dev_info,
.qsize_up_rings = sfc_ef100_tx_qsize_up_rings,
.qcreate = sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
},
.features = SFC_DP_RX_FEAT_FLOW_FLAG |
SFC_DP_RX_FEAT_FLOW_MARK,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.queue_offload_capa = 0,
.get_dev_info = sfc_ef10_essb_rx_get_dev_info,
.pool_ops_supported = sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
},
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.get_dev_info = sfc_ef10_rx_get_dev_info,
.qsize_up_rings = sfc_ef10_rx_qsize_up_rings,
.qcreate = sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
if (txq->sw_ring == NULL)
goto fail_sw_ring_alloc;
- if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+ if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
info->txq_entries,
SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_EF10,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
.type = SFC_DP_TX,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..33f800c46e59 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -102,19 +102,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = sa->sriov.num_vfs;
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->max_rx_queues = sa->rxq_max;
dev_info->max_tx_queues = sa->txq_max;
@@ -142,8 +142,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
dev_info->tx_queue_offload_capa;
- if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->default_txconf.offloads |= txq_offloads_def;
@@ -912,16 +912,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
switch (link_fc) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case EFX_FCNTL_RESPOND:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case EFX_FCNTL_GENERATE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
default:
sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -952,16 +952,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
fcntl = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
fcntl = EFX_FCNTL_RESPOND;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
fcntl = EFX_FCNTL_GENERATE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
break;
default:
@@ -1070,7 +1070,7 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
}
dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
@@ -1247,7 +1247,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
- qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
qinfo->scattered_rx = 1;
}
qinfo->nb_desc = rxq_info->entries;
@@ -1472,9 +1472,9 @@ static efx_tunnel_protocol_t
sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
{
switch (rte_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
return EFX_TUNNEL_PROTOCOL_VXLAN;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
return EFX_TUNNEL_PROTOCOL_GENEVE;
default:
return EFX_TUNNEL_NPROTOS;
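
The mapping above is reached through rte_eth_dev_udp_tunnel_port_add()/_del(); a minimal caller sketch with the renamed enum (4789 is the IANA-assigned VXLAN port, used here purely for illustration):

    #include <rte_ethdev.h>

    static int
    add_vxlan_port(uint16_t port_id)
    {
        struct rte_eth_udp_tunnel tunnel = {
            .udp_port = 4789, /* IANA VXLAN UDP port */
            .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
        };

        return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
    }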
@@ -1601,7 +1601,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
/*
* Mapping of hash configuration between RTE and EFX is not one-to-one,
- * hence, conversion is done here to derive a correct set of ETH_RSS
+ * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
* flags which corresponds to the active EFX configuration stored
* locally in 'sfc_adapter' and kept up-to-date
*/
@@ -1727,8 +1727,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
for (entry = 0; entry < reta_size; entry++) {
- int grp = entry / RTE_RETA_GROUP_SIZE;
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[grp].mask >> grp_idx) & 1)
reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1777,10 +1777,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
for (entry = 0; entry < reta_size; entry++) {
- int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+ int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
struct rte_eth_rss_reta_entry64 *grp;
- grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+ grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
if (grp->mask & (1ull << grp_idx)) {
if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 4f5993a68d23..dc2cdfea13c4 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -390,7 +390,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
const struct rte_flow_item_vlan *spec = NULL;
const struct rte_flow_item_vlan *mask = NULL;
const struct rte_flow_item_vlan supp_mask = {
- .tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+ .tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
.inner_type = RTE_BE16(0xffff),
};
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..dea5272a79bc 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -387,7 +387,7 @@ sfc_port_configure(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
port->pdu = rxmode->max_rx_pkt_len;
else
port->pdu = EFX_MAC_PDU(dev_data->mtu);
@@ -577,66 +577,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
memset(link_info, 0, sizeof(*link_info));
if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
- link_info->link_status = ETH_LINK_DOWN;
+ link_info->link_status = RTE_ETH_LINK_DOWN;
else
- link_info->link_status = ETH_LINK_UP;
+ link_info->link_status = RTE_ETH_LINK_UP;
switch (link_mode) {
case EFX_LINK_10HDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_10FDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100HDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_100FDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_1000HDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_1000FDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_10000FDX:
- link_info->link_speed = ETH_SPEED_NUM_10G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_25000FDX:
- link_info->link_speed = ETH_SPEED_NUM_25G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_25G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_40000FDX:
- link_info->link_speed = ETH_SPEED_NUM_40G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_40G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_50000FDX:
- link_info->link_speed = ETH_SPEED_NUM_50G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_50G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100000FDX:
- link_info->link_speed = ETH_SPEED_NUM_100G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
SFC_ASSERT(B_FALSE);
/* FALLTHROUGH */
case EFX_LINK_UNKNOWN:
case EFX_LINK_DOWN:
- link_info->link_speed = ETH_SPEED_NUM_NONE;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_NONE;
link_info->link_duplex = 0;
break;
}
- link_info->link_autoneg = ETH_LINK_AUTONEG;
+ link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
int
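
The struct filled in above is what applications read back through the link API; a minimal polling sketch with the renamed constants:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_link(uint16_t port_id)
    {
        struct rte_eth_link link;

        if (rte_eth_link_get_nowait(port_id, &link) != 0)
            return;

        if (link.link_status == RTE_ETH_LINK_UP)
            printf("port %u: up, %u Mbps, %s duplex\n", port_id,
                   link.link_speed,
                   link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
                   "full" : "half");
        else
            printf("port %u: down\n", port_id);
    }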
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..a83b47a8d111 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -647,9 +647,9 @@ struct sfc_dp_rx sfc_efx_rx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_RX_EFX,
},
.features = SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.qsize_up_rings = sfc_efx_rx_qsize_up_rings,
.qcreate = sfc_efx_rx_qcreate,
.qdestroy = sfc_efx_rx_qdestroy,
@@ -930,7 +930,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (encp->enc_tunnel_encapsulations_supported == 0)
- no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
return ~no_caps;
}
@@ -940,7 +940,7 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ caps |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
return caps & sfc_rx_get_offload_mask(sa);
}
@@ -1141,7 +1141,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
encp->enc_rx_prefix_size,
- (offloads & DEV_RX_OFFLOAD_SCATTER),
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
encp->enc_rx_scatter_max,
&error)) {
sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1167,15 +1167,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
rxq_info->type_flags |=
- (offloads & DEV_RX_OFFLOAD_SCATTER) ?
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
if ((encp->enc_tunnel_encapsulations_supported != 0) &&
(sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_RX, sw_index,
@@ -1205,7 +1205,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->refill_mb_pool = mb_pool;
if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
- (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
else
rxq_info->rxq_flags = 0;
@@ -1301,19 +1301,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
* Mapping between RTE RSS hash functions and their EFX counterparts.
*/
static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
- { ETH_RSS_NONFRAG_IPV4_TCP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV4_UDP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP,
EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
- { ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE) },
- { ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_IPV6_EX,
+ { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_EX,
EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
EFX_RX_HASH(IPV6, 2TUPLE) }
};
@@ -1633,10 +1633,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
int rc = 0;
switch (rxmode->mq_mode) {
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* No special checks are required */
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
sfc_err(sa, "RSS is not available");
rc = EINVAL;
@@ -1653,16 +1653,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
* so unsupported offloads cannot be added as the result of
* below check.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
- (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+ (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
- rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
}
- if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
- rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
}
return rc;
@@ -1808,7 +1808,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
}
configure_rss:
- rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+ rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
if (rss->channels > 0) {
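
The "cannot be disabled" fixups above work because RTE_ETH_RX_OFFLOAD_CHECKSUM is not a separate capability bit but a convenience mask; post-rename it is still defined in rte_ethdev.h as the OR of the three checksum offloads:

    #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
                                         RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
                                         RTE_ETH_RX_OFFLOAD_TCP_CKSUM)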
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d261..359acc71a47f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (!encp->enc_hw_tx_insert_vlan_enabled)
- no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (!encp->enc_tunnel_encapsulations_supported)
- no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (!sa->tso)
- no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
- no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
- no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
return ~no_caps;
}
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
}
/* We either perform both TCP and UDP offload, or no offload at all */
- if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
- ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+ if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+ ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
sfc_err(sa, "TCP and UDP offloads can't be set independently");
rc = EINVAL;
}
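
The condition above is the "both or neither" idiom: comparing the two zero-tests is true exactly when one checksum flag is set without the other. Spelled out (an equivalent sketch, stdbool assumed):

    bool tcp = (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0;
    bool udp = (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0;

    if (tcp != udp)
        rc = EINVAL; /* hardware computes TCP and UDP checksums together */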
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
int rc = 0;
switch (txmode->mq_mode) {
- case ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_NONE:
break;
default:
sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -515,23 +515,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
if (rc != 0)
goto fail_ev_qstart;
- if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_IPV4;
- if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_IPV4;
- if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
flags |= EFX_TXQ_CKSUM_TCPUDP;
- if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
}
- if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
flags |= EFX_TXQ_FATSOV2;
rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -862,9 +862,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/*
* Here VLAN TCI is expected to be zero in case if no
- * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
* if the calling app ignores the absence of
- * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
* TX_ERROR will occur
*/
pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1228,13 +1228,13 @@ struct sfc_dp_tx sfc_efx_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_TX_EFX,
},
.features = 0,
- .dev_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
.qsize_up_rings = sfc_efx_tx_qsize_up_rings,
.qcreate = sfc_efx_tx_qcreate,
.qdestroy = sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
return status;
/* Link UP */
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
struct pmd_internals *p = dev->data->dev_private;
/* Link DOWN */
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
/* Firmware */
softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
/* dev->data */
dev->data->dev_private = dev_private;
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
dev->data->mac_addrs = &eth_addr;
dev->data->promiscuous = 1;
dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 7416a6b1b816..255444a4181d 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
eth_dev_configure(struct rte_eth_dev *dev)
{
struct rte_eth_dev_data *data = dev->data;
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
dev->rx_pkt_burst = eth_szedata2_rx_scattered;
data->scattered_rx = 1;
} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_queues = internals->max_rx_queues;
dev_info->max_tx_queues = internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
dev_info->tx_offload_capa = 0;
dev_info->rx_queue_offload_capa = 0;
dev_info->tx_queue_offload_capa = 0;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1204,10 +1204,10 @@ eth_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_status = ETH_LINK_UP;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..ad5980ef5280 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
#define TAP_IOV_DEFAULT_MAX 1024
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
static int tap_devices_count;
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
static volatile uint32_t tap_trigger; /* Rx trigger */
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
len = readv(process_private->rxq_fds[rxq->queue_id],
*rxq->iovecs,
- 1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+ 1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
rxq->nb_rx_desc : 1));
if (len < (int)sizeof(struct tun_pi))
break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
seg->next = NULL;
mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
RTE_PTYPE_ALL_MASK);
- if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
tap_verify_csum(mbuf);
/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
}
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
}
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
uint32_t speed = pmd_link.link_speed;
uint32_t capa = 0;
- if (speed >= ETH_SPEED_NUM_10M)
- capa |= ETH_LINK_SPEED_10M;
- if (speed >= ETH_SPEED_NUM_100M)
- capa |= ETH_LINK_SPEED_100M;
- if (speed >= ETH_SPEED_NUM_1G)
- capa |= ETH_LINK_SPEED_1G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_2_5G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_5G;
- if (speed >= ETH_SPEED_NUM_10G)
- capa |= ETH_LINK_SPEED_10G;
- if (speed >= ETH_SPEED_NUM_20G)
- capa |= ETH_LINK_SPEED_20G;
- if (speed >= ETH_SPEED_NUM_25G)
- capa |= ETH_LINK_SPEED_25G;
- if (speed >= ETH_SPEED_NUM_40G)
- capa |= ETH_LINK_SPEED_40G;
- if (speed >= ETH_SPEED_NUM_50G)
- capa |= ETH_LINK_SPEED_50G;
- if (speed >= ETH_SPEED_NUM_56G)
- capa |= ETH_LINK_SPEED_56G;
- if (speed >= ETH_SPEED_NUM_100G)
- capa |= ETH_LINK_SPEED_100G;
+ if (speed >= RTE_ETH_SPEED_NUM_10M)
+ capa |= RTE_ETH_LINK_SPEED_10M;
+ if (speed >= RTE_ETH_SPEED_NUM_100M)
+ capa |= RTE_ETH_LINK_SPEED_100M;
+ if (speed >= RTE_ETH_SPEED_NUM_1G)
+ capa |= RTE_ETH_LINK_SPEED_1G;
+ if (speed >= RTE_ETH_SPEED_NUM_2_5G)
+ capa |= RTE_ETH_LINK_SPEED_2_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_10G)
+ capa |= RTE_ETH_LINK_SPEED_10G;
+ if (speed >= RTE_ETH_SPEED_NUM_20G)
+ capa |= RTE_ETH_LINK_SPEED_20G;
+ if (speed >= RTE_ETH_SPEED_NUM_25G)
+ capa |= RTE_ETH_LINK_SPEED_25G;
+ if (speed >= RTE_ETH_SPEED_NUM_40G)
+ capa |= RTE_ETH_LINK_SPEED_40G;
+ if (speed >= RTE_ETH_SPEED_NUM_50G)
+ capa |= RTE_ETH_LINK_SPEED_50G;
+ if (speed >= RTE_ETH_SPEED_NUM_56G)
+ capa |= RTE_ETH_LINK_SPEED_56G;
+ if (speed >= RTE_ETH_SPEED_NUM_100G)
+ capa |= RTE_ETH_LINK_SPEED_100G;
return capa;
}
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
if (!(ifr.ifr_flags & IFF_UP) ||
!(ifr.ifr_flags & IFF_RUNNING)) {
- dev_link->link_status = ETH_LINK_DOWN;
+ dev_link->link_status = RTE_ETH_LINK_DOWN;
return 0;
}
}
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
dev_link->link_status =
((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
- ETH_LINK_UP :
- ETH_LINK_DOWN);
+ RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN);
return 0;
}
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
int ret;
/* initialize GSO context */
- gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!pmd->gso_ctx_mp) {
/*
* Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->csum = !!(offloads &
- (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM));
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
if (ret == -1)
@@ -1765,7 +1765,7 @@ static int
tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1773,7 +1773,7 @@ static int
tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- if (fc_conf->mode != RTE_FC_NONE)
+ if (fc_conf->mode != RTE_ETH_FC_NONE)
return -ENOTSUP;
return 0;
}
@@ -2267,7 +2267,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
}
}
}
- pmd_link.link_speed = ETH_SPEED_NUM_10G;
+ pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
@@ -2441,7 +2441,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
return 0;
}
- speed = ETH_SPEED_NUM_10G;
+ speed = RTE_ETH_SPEED_NUM_10G;
/* use tap%d which causes kernel to choose next available */
strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
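For context on the renamed flow-control constants used in the tap hunks above: tap only ever reports RTE_ETH_FC_NONE and rejects anything else with -ENOTSUP. A minimal application-side sketch (the port id is an assumption for illustration):

	#include <string.h>
	#include <rte_ethdev.h>

	/* Sketch: disable pause frames using the renamed RTE_ETH_FC_* modes.
	 * A PMD such as tap that cannot honor other modes returns -ENOTSUP
	 * from its flow_ctrl_set callback. */
	static int
	disable_flow_ctrl(uint16_t port_id)
	{
		struct rte_eth_fc_conf fc_conf;
		int ret;

		memset(&fc_conf, 0, sizeof(fc_conf));
		ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
		if (ret != 0)
			return ret;
		fc_conf.mode = RTE_ETH_FC_NONE;
		return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}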
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
#define TAP_RSS_HASH_KEY_SIZE 40
/* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
/* hashed fields for RSS */
enum hash_field {
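TAP_RSS_HF_MASK above expresses the supported set by inverting it; the generic way for an application to stay inside any PMD's supported hash fields is to mask the request against dev_info. A hedged sketch (port id assumed):

	#include <rte_ethdev.h>

	/* Sketch: clamp requested RSS hash fields to what the PMD reports. */
	static uint64_t
	clamp_rss_hf(uint16_t port_id, uint64_t wanted)
	{
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
			return 0;
		return wanted & dev_info.flow_type_rss_offloads;
	}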
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..8d02fbae7274 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
if (nic->duplex == NICVF_HALF_DUPLEX)
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else if (nic->duplex == NICVF_FULL_DUPLEX)
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_speed = nic->speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* rte_eth_link_get() might need to wait up to 9 seconds */
for (i = 0; i < MAX_CHECK_TIME; i++) {
nicvf_link_status_update(nic, &link);
- if (link.link_status == ETH_LINK_UP)
+ if (link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
}
@@ -177,9 +177,9 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
if (frame_size > NIC_HW_L2_MAX_LEN)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
@@ -404,35 +404,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
{
uint64_t nic_rss = 0;
- if (ethdev_rss & ETH_RSS_IPV4)
+ if (ethdev_rss & RTE_ETH_RSS_IPV4)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_IPV6)
+ if (ethdev_rss & RTE_ETH_RSS_IPV6)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
nic_rss |= RSS_TUN_VXLAN_ENA;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
nic_rss |= RSS_TUN_GENEVE_ENA;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
nic_rss |= RSS_TUN_NVGRE_ENA;
}
@@ -445,28 +445,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic, uint64_t nic_rss)
uint64_t ethdev_rss = 0;
if (nic_rss & RSS_IP_ENA)
- ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+ ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP);
if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
- ethdev_rss |= ETH_RSS_PORT;
+ ethdev_rss |= RTE_ETH_RSS_PORT;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
if (nic_rss & RSS_TUN_VXLAN_ENA)
- ethdev_rss |= ETH_RSS_VXLAN;
+ ethdev_rss |= RTE_ETH_RSS_VXLAN;
if (nic_rss & RSS_TUN_GENEVE_ENA)
- ethdev_rss |= ETH_RSS_GENEVE;
+ ethdev_rss |= RTE_ETH_RSS_GENEVE;
if (nic_rss & RSS_TUN_NVGRE_ENA)
- ethdev_rss |= ETH_RSS_NVGRE;
+ ethdev_rss |= RTE_ETH_RSS_NVGRE;
}
return ethdev_rss;
}
@@ -493,8 +493,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
reta_conf[i].reta[j] = tbl[j];
}
@@ -523,8 +523,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
return ret;
/* Copy RETA table */
- for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
if ((reta_conf[i].mask >> j) & 0x01)
tbl[j] = reta_conf[i].reta[j];
}
@@ -821,9 +821,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
dev->data->nb_rx_queues,
dev->data->dev_conf.lpbk_mode, rsshf);
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
ret = nicvf_rss_term(nic);
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
if (ret)
PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -884,7 +884,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
multiseg = true;
break;
}
@@ -1007,7 +1007,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->offloads = offloads;
- is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+ is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
/* Choose optimum free threshold value for multipool case */
if (!is_single_pool) {
@@ -1397,11 +1397,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
PMD_INIT_FUNC_TRACE();
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1430,10 +1430,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
};
return 0;
@@ -1597,8 +1597,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
/* Configure VLAN Strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = nicvf_vlan_offload_config(dev, mask);
/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1727,11 +1727,11 @@ nicvf_dev_start(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
/* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
+ mtu = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ?
dev->data->dev_conf.rxmode.max_rx_pkt_len
- RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
@@ -1914,8 +1914,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!rte_eal_has_hugepages()) {
PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1927,8 +1927,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
@@ -1938,7 +1938,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
return -EINVAL;
}
@@ -1973,7 +1973,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic->offload_cksum = 1;
PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2050,8 +2050,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct nicvf *nic = nicvf_pmd_priv(dev);
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
nicvf_vlan_hw_strip(nic, true);
else
nicvf_vlan_hw_strip(nic, false);
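The RETA loops in the nicvf hunks above show the convention the renamed RTE_ETH_RETA_GROUP_SIZE encodes: 64 entries per struct rte_eth_rss_reta_entry64, gated by a per-group bitmask. An application-side sketch (port id, table size and queue count are assumptions; reta_size is assumed to be a multiple of the group size):

	#include <stdint.h>
	#include <string.h>
	#include <rte_ethdev.h>

	/* Sketch: spread the indirection table round-robin over nb_rxq queues. */
	static int
	spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
	{
		struct rte_eth_rss_reta_entry64 reta[reta_size / RTE_ETH_RETA_GROUP_SIZE];
		uint16_t i, idx, shift;

		memset(reta, 0, sizeof(reta));
		for (i = 0; i < reta_size; i++) {
			idx = i / RTE_ETH_RETA_GROUP_SIZE;
			shift = i % RTE_ETH_RETA_GROUP_SIZE;
			reta[idx].mask |= UINT64_C(1) << shift;
			reta[idx].reta[shift] = i % nb_rxq;
		}
		return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
	}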
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..c1876bb9e1b7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,33 +16,33 @@
#define NICVF_UNKNOWN_DUPLEX 0xff
#define NICVF_RSS_OFFLOAD_PASS1 ( \
- ETH_RSS_PORT | \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define NICVF_RSS_OFFLOAD_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
#define NICVF_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define NICVF_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NICVF_DEFAULT_RX_FREE_THRESH 224
#define NICVF_DEFAULT_TX_FREE_THRESH 224
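NICVF_RSS_OFFLOAD_PASS1 above is exactly the kind of set an application would request through the renamed flags. A minimal configuration sketch (port id and queue counts are assumptions):

	#include <rte_ethdev.h>

	/* Sketch: request RSS with the renamed RTE_ETH_RSS_* / RTE_ETH_MQ_* names. */
	static int
	enable_ip_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
	{
		struct rte_eth_conf conf = {
			.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
			.rx_adv_conf.rss_conf = {
				.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
					  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
					  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
			},
		};

		return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	}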
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..c6e8a14ddf3f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -997,7 +997,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
restart = (rxcfg & TXGBE_RXCFG_ENA) &&
!(rxcfg & TXGBE_RXCFG_VLAN);
rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1032,7 +1032,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
wr32m(hw, TXGBE_VLANCTL,
TXGBE_VLANCTL_TPID_MASK,
@@ -1052,7 +1052,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
TXGBE_TAGTPID_LSB(tpid));
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (vlan_ext) {
/* Only the high 16-bits is valid */
wr32m(hw, TXGBE_EXTAG,
@@ -1137,10 +1137,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -1239,7 +1239,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
txgbe_vlan_strip_queue_set(dev, i, 1);
else
txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1253,17 +1253,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct txgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -1274,25 +1274,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
txgbe_vlan_hw_strip_config(dev);
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
txgbe_vlan_hw_filter_enable(dev);
else
txgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
txgbe_vlan_hw_extend_enable(dev);
else
txgbe_vlan_hw_extend_disable(dev);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
txgbe_qinq_hw_strip_enable(dev);
else
txgbe_qinq_hw_strip_disable(dev);
@@ -1330,10 +1330,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -1356,18 +1356,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1377,13 +1377,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode =
- ETH_MQ_RX_VMDQ_ONLY;
+ RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -1392,13 +1392,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
dev->data->dev_conf.txmode.mq_mode =
- ETH_MQ_TX_VMDQ_ONLY;
+ RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -1413,13 +1413,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdb+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1428,15 +1428,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1445,39 +1445,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -1494,8 +1494,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = txgbe_check_mq_mode(dev);
@@ -1637,7 +1637,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
* - half duplex (checked afterwards for valid speeds)
* - fixed speed: TODO implement
*/
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -1704,15 +1704,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
txgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1773,8 +1773,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
if (err)
goto error;
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
link_speeds = &dev->data->dev_conf.link_speeds;
if (*link_speeds & ~allowed_speeds) {
@@ -1783,20 +1783,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = (TXGBE_LINK_SPEED_100M_FULL |
TXGBE_LINK_SPEED_1GB_FULL |
TXGBE_LINK_SPEED_10GB_FULL);
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= TXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= TXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= TXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= TXGBE_LINK_SPEED_100M_FULL;
}
@@ -2611,7 +2611,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2644,11 +2644,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_desc_lim = tx_desc_lim;
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -2705,10 +2705,10 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
hw->mac.get_link_status = true;
@@ -2722,8 +2722,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -2742,34 +2742,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
}
intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case TXGBE_LINK_SPEED_UNKNOWN:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case TXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case TXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case TXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -2994,7 +2994,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3225,13 +3225,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3363,16 +3363,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
return -ENOTSUP;
}
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3404,16 +3404,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
for (i = 0; i < reta_size; i += 4) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
if (!mask)
continue;
@@ -3593,12 +3593,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
wr32(hw, TXGBE_UCADDRTBL(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
wr32(hw, TXGBE_UCADDRTBL(i), 0);
}
@@ -3622,15 +3622,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= TXGBE_POOLETHCTL_UTA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= TXGBE_POOLETHCTL_MCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= TXGBE_POOLETHCTL_UCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= TXGBE_POOLETHCTL_BCA;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= TXGBE_POOLETHCTL_MCP;
return new_val;
@@ -4281,15 +4281,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = TXGBE_INCVAL_100;
shift = TXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = TXGBE_INCVAL_1GB;
shift = TXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = TXGBE_INCVAL_10GB;
shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4645,7 +4645,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -4656,7 +4656,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled */
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -4680,9 +4680,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled */
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4695,7 +4695,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4925,7 +4925,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -4956,7 +4956,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -4996,7 +4996,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5004,7 +5004,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
ret = -EINVAL;
@@ -5012,7 +5012,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
ret = -EINVAL;
@@ -5020,7 +5020,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5052,7 +5052,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5062,7 +5062,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, 0);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5072,7 +5072,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, 0);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5082,7 +5082,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, 0);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
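txgbe_dev_link_update_share() above fills struct rte_eth_link with the renamed constants; reading them back from an application is unchanged apart from the names. A sketch (port id assumed):

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Sketch: report link state using the renamed RTE_ETH_LINK_* values. */
	static void
	print_link(uint16_t port_id)
	{
		struct rte_eth_link link;

		if (rte_eth_link_get_nowait(port_id, &link) != 0)
			return;
		printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
				"full" : "half");
	}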
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..75a9e2580e27 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -56,15 +56,15 @@
#define TXGBE_5TUPLE_MIN_PRI 1
#define TXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define TXGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define TXGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
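The RTE_ETH_VLAN_*_MASK values in the hunks above are what the driver callback receives; on the API side the matching RTE_ETH_VLAN_*_OFFLOAD bits (numerically identical) drive rte_eth_dev_set_vlan_offload(). A hedged sketch (port id assumed; note the call applies the full desired state of all four VLAN offloads, not just the bits named here):

	#include <rte_ethdev.h>

	/* Sketch: turn on VLAN stripping and filtering with the renamed bits. */
	static int
	enable_vlan_strip_filter(uint16_t port_id)
	{
		int offload_mask = RTE_ETH_VLAN_STRIP_OFFLOAD |
				   RTE_ETH_VLAN_FILTER_OFFLOAD;

		return rte_eth_dev_set_vlan_offload(port_id, offload_mask);
	}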
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 18ed94bd277b..05773cb20786 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -491,14 +491,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -579,22 +579,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -652,8 +652,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
txgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -896,10 +896,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
txgbevf_vlan_strip_queue_set(dev, i, on);
}
}
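Because the VF cannot change CRC stripping, as the txgbevf_dev_configure() hunk above enforces, an application should check the capability before asking for RTE_ETH_RX_OFFLOAD_KEEP_CRC. A sketch (port id assumed):

	#include <rte_ethdev.h>

	/* Sketch: request KEEP_CRC only when the port actually reports it. */
	static void
	request_keep_crc(uint16_t port_id, struct rte_eth_conf *conf)
	{
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
		    (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_KEEP_CRC))
			conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
	}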
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
uint32_t *fdirctrl, uint32_t *flex)
{
*fdirctrl = 0;
*flex = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_signature_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
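configure_fdir_flags() above consumes the renamed pballoc enum through the legacy fdir_conf member of struct rte_eth_conf. A sketch of setting it from the application side (field and mode-enum names here are an assumption based on the then-current, since-deprecated fdir_conf interface):

	#include <rte_ethdev.h>

	/* Sketch: select perfect-match mode with the smallest (64 KB) filter
	 * buffer; larger pballoc values trade memory for more filters. */
	static void
	set_fdir_perfect_64k(struct rte_eth_conf *conf)
	{
		conf->fdir_conf.mode = RTE_FDIR_MODE_PERFECT;
		conf->fdir_conf.pballoc = RTE_ETH_FDIR_PBALLOC_64K;
	}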
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= TXGBE_SECRXCTL_CRCSTRIP;
wr32(hw, TXGBE_SECRXCTL, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
reg = rd32(hw, TXGBE_SECTXCTL);
if (reg != TXGBE_SECTXCTL_STFWD) {
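txgbe_crypto_enable_ipsec() above insists on security being enabled in both directions; an application can verify the renamed security offloads up front. A sketch (port id assumed):

	#include <stdbool.h>
	#include <rte_ethdev.h>

	/* Sketch: check that inline IPsec can run in both directions. */
	static bool
	inline_ipsec_capable(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
			return false;
		return (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SECURITY) &&
		       (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY);
	}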
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 494d779a3c9d..44f6f103edd2 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -103,15 +103,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct txgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -258,13 +258,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
break;
}
@@ -613,29 +613,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &eth_dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = TXGBE_DEV_HW(eth_dev);
vmvir = rd32(hw, TXGBE_POOLTAG(vf));
vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
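txgbe_pf_host_init() above sizes pools from the VF count; the arithmetic behind the renamed RTE_ETH_*_POOLS constants (numerically 16/32/64) reduces to the following sketch:

	#include <rte_ethdev.h>

	/* Sketch: more VFs -> fewer queues per pool, mirroring the PF init. */
	static uint16_t
	queues_per_pool(uint16_t vf_num)
	{
		if (vf_num >= RTE_ETH_32_POOLS)	/* 33..64 VFs: 64 pools */
			return 2;
		if (vf_num >= RTE_ETH_16_POOLS)	/* 17..32 VFs: 32 pools */
			return 4;
		return 8;			/* up to 16 VFs: 16 pools */
	}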
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c302d49af728 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1939,7 +1939,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
uint64_t
txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
{
- return DEV_RX_OFFLOAD_VLAN_STRIP;
+ return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
uint64_t
@@ -1949,35 +1949,35 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (!txgbe_is_vf(dev))
- offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_VLAN_EXTEND);
+ offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
/*
* RSC is only supported by PF devices in a non-SR-IOV
* mode.
*/
if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == txgbe_mac_raptor)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -2202,32 +2202,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_UDP_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (!txgbe_is_vf(dev))
- tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2329,7 +2329,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/* Modification to set tail pointer for virtual function
@@ -2579,7 +2579,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2880,20 +2880,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_VFPLCFG_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
if (rss_hf)
@@ -2910,20 +2910,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
} else {
mrqc = rd32(hw, TXGBE_RACTL);
mrqc &= ~TXGBE_RACTL_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_RACTL_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_RACTL_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_RACTL_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_RACTL_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6UDP;
if (rss_hf)
@@ -2964,39 +2964,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
rss_hf = 0;
} else {
mrqc = rd32(hw, TXGBE_RACTL);
if (mrqc & TXGBE_RACTL_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_RACTL_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_RACTL_RSSENA))
rss_hf = 0;
}
@@ -3026,7 +3026,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
*/
if (adapter->rss_reta_updated == 0) {
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == dev->data->nb_rx_queues)
j = 0;
reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3063,12 +3063,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
txgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* split rx buffer up into sections, each for 1 traffic class
@@ -3083,7 +3083,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
rxpbsize &= (~(0x3FF << 10));
@@ -3091,7 +3091,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
- if (num_pools == ETH_16_POOLS) {
+ if (num_pools == RTE_ETH_16_POOLS) {
mrqc = TXGBE_PORTCTL_NUMTC_8;
mrqc |= TXGBE_PORTCTL_NUMVT_16;
} else {
@@ -3110,7 +3110,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_POOLCTL, vt_ctl);
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3131,7 +3131,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
wr32(hw, TXGBE_POOLRXENA(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
wr32(hw, TXGBE_ETHADDRIDX, 0);
wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3201,7 +3201,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
/*PF VF Transmit Enable*/
wr32(hw, TXGBE_POOLTXENA(0),
vmdq_tx_conf->nb_queue_pools ==
- ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3217,12 +3217,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3232,7 +3232,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3250,12 +3250,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3265,7 +3265,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3292,7 +3292,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3319,7 +3319,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3455,7 +3455,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_rx = DCB_RX_CONFIG;
/*
@@ -3466,8 +3466,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
/*Configure general VMDQ and DCB RX parameters*/
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3480,7 +3480,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -3491,7 +3491,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3507,15 +3507,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
- if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -3556,7 +3556,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
wr32(hw, TXGBE_PBRXSIZE(i), 0);
}
if (config_dcb_tx) {
@@ -3572,7 +3572,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
wr32(hw, TXGBE_PBTXSIZE(i), 0);
wr32(hw, TXGBE_PBTXDMATH(i), 0);
}
@@ -3614,7 +3614,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/* If the TC count is 8,
@@ -3628,7 +3628,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = txgbe_dcb_pfc_enabled;
}
txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -3699,12 +3699,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -3760,7 +3760,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* pool enabling for receive - 64 */
wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
/*
@@ -3884,11 +3884,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
@@ -3911,15 +3911,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -3942,21 +3942,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
txgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
txgbe_rss_disable(dev);
@@ -3967,18 +3967,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
txgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4008,7 +4008,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
txgbe_vmdq_tx_hw_configure(hw);
else
wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4018,13 +4018,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -4087,10 +4087,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4098,22 +4098,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
"is disabled");
return -EINVAL;
}
rfctl = rd32(hw, TXGBE_PSRCTL);
- if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~TXGBE_PSRCTL_RSCDIA;
else
rfctl |= TXGBE_PSRCTL_RSCDIA;
wr32(hw, TXGBE_PSRCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set PSRCTL.RSCACK bit */
@@ -4253,7 +4253,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
}
#endif
}
@@ -4296,7 +4296,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4305,7 +4305,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
} else {
@@ -4329,7 +4329,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4339,7 +4339,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -4376,11 +4376,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -4395,7 +4395,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = rd32(hw, TXGBE_PSRCTL);
rxcsum |= TXGBE_PSRCTL_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= TXGBE_PSRCTL_L4CSUM;
else
rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4404,7 +4404,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == txgbe_mac_raptor) {
rdrxctl = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4527,8 +4527,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
txgbe_setup_loopback_link_raptor(hw);
#ifdef RTE_LIB_SECURITY
- if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
- (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+ if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = txgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -4836,7 +4836,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first .
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Set PSR type for VF RSS according to max Rx queue */
psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4888,7 +4888,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
*/
wr32(hw, TXGBE_RXCFG(i), srrctl);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(rxmode->max_rx_pkt_len +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4897,8 +4897,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/*
@@ -5069,7 +5069,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
* little-endian order.
*/
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == conf->conf.queue_num)
j = 0;
reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
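
[Note: the txgbe code above programs the redirection table at register level. For reference, a minimal application-side sketch of the same queue spreading through the public ethdev API, using the renamed identifiers — reta_spread() is a hypothetical helper, error handling is trimmed:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
reta_spread(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64
		reta_conf[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE];
	struct rte_eth_dev_info dev_info;
	uint16_t i;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0 ||
	    dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512)
		return -1;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < dev_info.reta_size; i++) {
		/* each 64-entry group has its own mask of valid slots */
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE]
			.reta[i % RTE_ETH_RETA_GROUP_SIZE] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
			dev_info.reta_size);
}
]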
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
uint8_t rx_deferred_start; /**< not in global dev start. */
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9aed..778460aab5e1 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct rte_eth_link pmd_link = {
.link_speed = 10000,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN
};
struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_atomic32_set(&internal->dev_attached, 1);
update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
rte_atomic32_set(&internal->dev_attached, 0);
update_queuing_status(eth_dev);
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
if (vhost_driver_setup(dev) < 0)
return -1;
- internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_tx_queues = internal->max_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
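
[Note: the capability bits reported here feed the usual check-then-enable pattern on the application side. A sketch with the renamed flags — port_id assumed valid, return codes ignored for brevity:

struct rte_eth_conf conf = { 0 };
struct rte_eth_dev_info dev_info;

rte_eth_dev_info_get(port_id, &dev_info);
/* request VLAN stripping only if the PMD advertises it */
if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rte_eth_dev_configure(port_id, 1, 1, &conf);
]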
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c95a..00bbbb2b3537 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -703,7 +703,7 @@ int
virtio_dev_close(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "virtio_dev_close");
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1763,7 +1763,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
- if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+ if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
config = &local_config;
virtio_read_dev_config(hw,
@@ -1777,7 +1777,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
}
}
if (hw->duplex == DUPLEX_UNKNOWN)
- hw->duplex = ETH_LINK_FULL_DUPLEX;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
hw->speed, hw->duplex);
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1876,7 +1876,7 @@ int
eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
- uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+ uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
int vectorized = 0;
int ret;
@@ -1948,22 +1948,22 @@ static uint32_t
virtio_dev_speed_capa_get(uint32_t speed)
{
switch (speed) {
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -2079,14 +2079,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "configure");
req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
- if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Rx multi queue mode %d",
rxmode->mq_mode);
return -EINVAL;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Tx multi queue mode %d",
txmode->mq_mode);
@@ -2104,20 +2104,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
req_features |=
(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
- if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_CSUM);
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
req_features |=
(1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2129,15 +2129,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
PMD_DRV_LOG(ERR,
"rx checksum not available on this host");
return -ENOTSUP;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
PMD_DRV_LOG(ERR,
@@ -2149,12 +2149,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
virtio_dev_cq_start(dev);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
hw->vlan_strip = 1;
- hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+ hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
- if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(ERR,
"vlan filtering not available on this host");
@@ -2207,7 +2207,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(INFO,
"disabled packed ring vectorized rx for TCP_LRO enabled");
hw->use_vec_rx = 0;
@@ -2234,10 +2234,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN_STRIP)) {
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
PMD_DRV_LOG(INFO,
"disabled split ring vectorized rx for offloading enabled");
hw->use_vec_rx = 0;
@@ -2401,7 +2401,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
struct rte_eth_link link;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "stop");
dev->data->dev_started = 0;
@@ -2440,28 +2440,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
memset(&link, 0, sizeof(link));
link.link_duplex = hw->duplex;
link.link_speed = hw->speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (!hw->started) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
PMD_INIT_LOG(DEBUG, "Get link status from hw");
virtio_read_dev_config(hw,
offsetof(struct virtio_net_config, status),
&status, sizeof(status));
if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
PMD_INIT_LOG(DEBUG, "Port %d is down",
dev->data->port_id);
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
PMD_INIT_LOG(DEBUG, "Port %d is up",
dev->data->port_id);
}
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2474,8 +2474,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct virtio_hw *hw = dev->data->dev_private;
uint64_t offloads = rxmode->offloads;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(NOTICE,
@@ -2485,8 +2485,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (mask & ETH_VLAN_STRIP_MASK)
- hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
+ hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -2508,33 +2508,33 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = hw->max_mtu;
host_features = VIRTIO_OPS(hw)->get_features(hw);
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
}
if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}
tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
return 0;
}
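
[Note: on the consumer side the renamed link constants are read back as below. Sketch only; per rte_ethdev.h, link_speed is in Mbps:

struct rte_eth_link link;

if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
    link.link_status == RTE_ETH_LINK_UP)
	printf("port %u: %u Mbps, %s-duplex\n", port_id, link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
	       "full" : "half");
]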
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a11..825a6adfc2b1 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,21 +41,21 @@
#define VMXNET3_TX_MAX_SEG UINT8_MAX
#define VMXNET3_TX_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define VMXNET3_RX_OFFLOAD_CAP \
- (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
@@ -399,9 +399,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
/* set the initial link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(eth_dev, &link);
return 0;
@@ -487,8 +487,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -548,7 +548,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
hw->queueDescPA = mz->iova;
hw->queue_desc_len = (uint16_t)size;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Allocate memory structure for UPT1_RSSConf and configure */
mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
"rss_conf", rte_socket_id(),
@@ -844,15 +844,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
devRead->rxFilterConf.rxMode = 0;
/* Setting up feature flags */
- if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
devRead->misc.uptFeatures |= VMXNET3_F_LRO;
devRead->misc.maxNumRxSG = 0;
}
- if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
ret = vmxnet3_rss_configure(dev);
if (ret != VMXNET3_SUCCESS)
return ret;
@@ -864,7 +864,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
}
ret = vmxnet3_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -931,7 +931,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
}
if (VMXNET3_VERSION_GE_4(hw) &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Check for additional RSS */
ret = vmxnet3_v4_rss_configure(dev);
if (ret != VMXNET3_SUCCESS) {
@@ -1040,9 +1040,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
hw->adapter_stopped = 1;
@@ -1372,7 +1372,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
dev_info->min_mtu = VMXNET3_MIN_MTU;
dev_info->max_mtu = VMXNET3_MAX_MTU;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1454,10 +1454,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret & 0x1)
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -1510,7 +1510,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1580,8 +1580,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint32_t *vf_table = devRead->rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
else
devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1590,8 +1590,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
VMXNET3_CMD_UPDATE_FEATURE);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
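
[Note: the mask values used with this callback come from rte_eth_dev_set_vlan_offload(). As I read the ethdev code, a set bit requests that offload enabled and a clear bit requests it disabled, so a single call reconfigures all VLAN offloads at once. Sketch:

/* enable stripping and filtering; VLAN extend stays/goes disabled */
int ret = rte_eth_dev_set_vlan_offload(port_id,
		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
]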
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 59bee9723cfc..7588ba929b65 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
VMXNET3_MAX_RX_QUEUES + 1)
#define VMXNET3_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define VMXNET3_V4_RSS_MASK ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define VMXNET3_MANDATORY_V4_RSS ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
/* RSS configuration structure - shared with device through GPA */
typedef struct VMXNET3_RSSConf {
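
[Note: applications intersect masks like these with the device capabilities before configuring RSS, as several of the examples below do. A sketch with the renamed bits — the requested set is illustrative:

struct rte_eth_dev_info dev_info;
uint64_t rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP;

rte_eth_dev_info_get(port_id, &dev_info);
/* keep only the hash types this port can actually compute */
port_conf.rx_adv_conf.rss_conf.rss_hf =
	rss_hf & dev_info.flow_type_rss_offloads;
]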
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 5cf53d4de825..0f2671f528f4 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
rss_hf = port_rss_conf->rss_hf &
(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
/* loading hashType */
dev_rss_conf->hashType = 0;
rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 5251db0b1674..ecc6ef2965ee 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,12 +71,12 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -334,7 +334,7 @@ check_port_link_status(uint16_t port_id)
if (link_get_err >= 0 && link.link_status) {
const char *dp = (link.link_duplex ==
- ETH_LINK_FULL_DUPLEX) ?
+ RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex";
printf("\nPort %u Link Up - speed %s - %s\n",
port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index f48400e21156..e4c627e203a4 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,18 +116,18 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -151,9 +151,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
@@ -243,9 +243,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
BOND_PORT, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
if (retval != 0)
rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 1b1029660e77..e6af8420e4c6 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,16 +80,16 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
}
},
};
@@ -127,9 +127,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6c9..5053d174335c 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
int ret;
memset(&cfg_port, 0, sizeof(cfg_port));
- cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+ cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
pause_param->tx_pause = 0;
pause_param->rx_pause = 0;
switch (fc_conf.mode) {
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
pause_param->rx_pause = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
pause_param->tx_pause = 1;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_param->rx_pause = 1;
pause_param->tx_pause = 1;
default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
if (pause_param->tx_pause) {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_FULL;
+ fc_conf.mode = RTE_ETH_FC_FULL;
else
- fc_conf.mode = RTE_FC_TX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
} else {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_RX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf.mode = RTE_FC_NONE;
+ fc_conf.mode = RTE_ETH_FC_NONE;
}
status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
for (vf = 0; vf < num_vfs; vf++) {
#ifdef RTE_NET_IXGBE
rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
- ETH_VMDQ_ACCEPT_UNTAG, 0);
+ RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
#endif
}
/* Enable Rx vlan filter, VF unspport status is discard */
- ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
if (ret != 0)
return ret;
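
[Note: for completeness, the get/modify/set cycle these RTE_ETH_FC_* values map onto. Sketch; real code should check both return values:

struct rte_eth_fc_conf fc_conf = { 0 };

if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) == 0) {
	fc_conf.mode = RTE_ETH_FC_FULL;	/* pause frames in both directions */
	rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}
]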
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index f70ab0cc9e38..3ac98add5692 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,14 +283,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -312,12 +312,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ca6cd200caad..5780928d75ee 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,14 +614,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -643,9 +643,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index db71f5aa0401..f44ee65372ff 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -218,9 +218,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55ef..150406e385d4 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
memset(&link, 0, sizeof(link));
do {
link_get_err = rte_eth_link_get(port_id, &link);
- if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+ if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
if (link_get_err < 0)
rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
rte_strerror(-link_get_err));
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
}
@@ -138,12 +138,12 @@ init_port(void)
},
.txmode = {
.offloads =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
},
};
struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 0c413180f889..94e3ac91b299 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,13 +819,13 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
/* Configuring port to use RSS for multiple RX queues. 8< */
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_PROTO_MASK,
+ .rss_hf = RTE_ETH_RSS_PROTO_MASK,
}
}
};
@@ -853,9 +853,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..aa41fcc1d037 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,14 +148,14 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..f6ecd9b0fe3a 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -267,5 +267,5 @@ link_is_up(const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..8aabea002bbb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -740,7 +740,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1097,9 +1097,9 @@ main(int argc, char **argv)
n_tx_queue = nb_lcores;
if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
n_tx_queue = MAX_TX_QUEUE_PER_PORT;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..73932564e459 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,20 +234,20 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1456,10 +1456,10 @@ print_usage(const char *prgname)
" \"parallel\" : Parallel\n"
" --" CMD_LINE_OPT_RX_OFFLOAD
": bitmask of the RX HW offload capabilities to enable/use\n"
- " (DEV_RX_OFFLOAD_*)\n"
+ " (RTE_ETH_RX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_TX_OFFLOAD
": bitmask of the TX HW offload capabilities to enable/use\n"
- " (DEV_TX_OFFLOAD_*)\n"
+ " (RTE_ETH_TX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_REASSEMBLE " NUM"
": max number of entries in reassemble(fragment) table\n"
" (zero (default value) disables reassembly)\n"
@@ -1908,7 +1908,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2211,12 +2211,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
frame_size = MTU_TO_FRAMELEN(mtu_size);
if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.max_rx_pkt_len = frame_size;
if (multi_seg_required()) {
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2239,12 +2239,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
portid, local_port_conf.txmode.offloads,
dev_info.tx_offload_capa);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
printf("port %u configurng rx_offloads=0x%" PRIx64
", tx_offloads=0x%" PRIx64 "\n",
@@ -2302,7 +2302,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
/* Pre-populate pkt offloads based on capabilities */
qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
- if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
tx_queueid++;
@@ -2663,7 +2663,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
struct rte_flow *flow;
int ret;
- if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
if (inbound) {
if ((dev_info.rx_offload_capa &
- DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware RX IPSec offload is not supported\n");
return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
} else { /* outbound */
if ((dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware TX IPSec offload is not supported\n");
return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+ *rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
}
/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+ *tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
return 0;
}
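
check_eth_dev_caps() and sa_check_offloads() above share one shape; a condensed sketch of the capability test, assuming inbound maps to Rx security and outbound to Tx security:

#include <errno.h>
#include <rte_ethdev.h>

/* Return 0 when the port can do inline IPsec in the given direction. */
static int
port_supports_inline(uint16_t port_id, int inbound)
{
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
                return -EIO;
        if (inbound)
                return (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SECURITY) ?
                        0 : -ENOTSUP;
        return (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY) ?
                0 : -ENOTSUP;
}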
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..96fb325ff180 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -112,11 +112,11 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
+ .offloads = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
},
};
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..81124dc0dc88 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -608,9 +608,9 @@ init_port(uint16_t port)
"Error during getting device (port %u) info: %s\n",
port, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -792,9 +792,9 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
if (new_mtu > RTE_ETHER_MAX_LEN)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* mtu + length of header + length of FCS = max pkt length */
conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
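
The KNI MTU callback above toggles the jumbo offload by frame length; the same logic in isolation, keeping the pre-MTU-rework semantics this patch still uses (max_rx_pkt_len set by hand, overhead covering headers plus FCS):

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Derive the jumbo offload bit from the requested MTU. */
static void
set_mtu_conf(struct rte_eth_conf *conf, uint32_t mtu, uint32_t overhead)
{
        if (mtu + overhead > RTE_ETHER_MAX_LEN)
                conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
        else
                conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
        conf->rxmode.max_rx_pkt_len = mtu + overhead;
}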
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 5f539c458cdd..89489843e2bd 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,12 +216,12 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1809,7 +1809,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2633,9 +2633,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (retval < 0) {
printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index b8c1e02d7598..80a72f7095cf 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -15,7 +15,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
uint16_t nb_ports_available = 0;
@@ -23,9 +23,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
int ret;
if (rsrc->event_mode) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}
/* Initialise each port */
@@ -61,9 +61,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
local_port_conf.rx_adv_conf.rss_conf.rss_hf);
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queue. 8< */
ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index bbb4a27a6d54..2e50339afb61 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 4e1a17cfe4f5..d228a842788d 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 911e40c66e0e..b4a69dde63dc 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the number of queues for a port. */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..9323426e9b1d 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,20 +124,20 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1815,9 +1815,9 @@ parse_args(int argc, char **argv)
printf("jumbo frame is enabled\n");
port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/*
* if no max-pkt-len set, then use the
@@ -1970,7 +1970,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2080,9 +2080,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..278fe95970f3 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,18 +111,18 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -494,8 +494,8 @@ parse_args(int argc, char **argv)
const struct option lenopts = {"max-pkt-len",
required_argument, 0, 0};
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/*
* if no max-pkt-len set, use the default
@@ -628,7 +628,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* Clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -807,9 +807,9 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
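
The rss_hf &= flow_type_rss_offloads line keeps recurring; spelled out as a helper (the warning is illustrative, not from the patch):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Trim the requested RSS hash types to what the PMD supports. */
static uint64_t
negotiate_rss_hf(const struct rte_eth_dev_info *info, uint64_t requested)
{
        uint64_t granted = requested & info->flow_type_rss_offloads;

        if (granted != requested)
                printf("rss_hf trimmed: requested 0x%" PRIx64
                       ", granted 0x%" PRIx64 "\n", requested, granted);
        return granted;
}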
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..85609e9d4593 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,19 +250,19 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_UDP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
@@ -1961,9 +1961,9 @@ parse_args(int argc, char **argv)
printf("jumbo frame is enabled \n");
port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**
* if no max-pkt-len set, use the default value
@@ -2222,7 +2222,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2622,9 +2622,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
rte_panic("Error during getting device (port %u) info:"
"%s\n", port_id, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..500444565463 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,19 +120,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -703,8 +703,8 @@ parse_args(int argc, char **argv)
"max-pkt-len", required_argument, 0, 0
};
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/*
* if no max-pkt-len set, use the default
@@ -926,7 +926,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1035,15 +1035,15 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
if (dev_info.max_rx_queues == 1)
- local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index 7470aa539a90..6880b58476f4 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.intr_conf = {
.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
link_get_err < 0 ? "0" :
rte_eth_link_speed_to_str(link.link_speed),
link_get_err < 0 ? "Link get failed" :
- (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex"),
port_statistics[portid].tx,
port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS
+ .mq_mode = RTE_ETH_MQ_RX_RSS
}
};
const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
{
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
info.default_rxconf.rx_drop_en = 1;
- if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
static struct rte_eth_conf eth_port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d2fe9f6b50d8..eb15899c902f 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
return ret;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
if (ret != 0)
return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..86671655b432 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,19 +307,19 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_TCP,
+ .rss_hf = RTE_ETH_RSS_TCP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -2988,9 +2988,9 @@ parse_args(int argc, char **argv)
printf("jumbo frame is enabled - disabling simple TX path\n");
port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* if no max-pkt-len set, use the default value
* RTE_ETHER_MAX_LEN
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3577,9 +3577,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..7ea670f109b8 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < reta_size; i++)
- reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+ reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
for (i = 0; i < reta_size; i++) {
- uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
- uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+ uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+ uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
uint32_t rss_qs_pos = i % rss->n_queues;
reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..db32b0d6c427 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -197,14 +197,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Force full Tx path in the driver, required for IEEE1588 */
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..5ef14c176b11 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,19 +51,19 @@ static struct rte_mempool *pool = NULL;
***/
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -333,8 +333,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_rx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -379,8 +379,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_tx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..e750928fb89d 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -61,7 +61,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
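
Note the qos_meter and qos_sched hunks above are the one place this series is more than a mechanical rename: the old code set txmode.mq_mode to ETH_DCB_NONE, a backward-compatibility alias for ETH_MQ_TX_NONE (both value 0), and only the RTE_ETH_MQ_TX_NONE spelling survives. The replacement is behaviour-preserving:

#include <rte_ethdev.h>

/* Old: .mq_mode = ETH_DCB_NONE      (compat alias, value 0)
 * New: .mq_mode = RTE_ETH_MQ_TX_NONE (same value, canonical name) */
static const struct rte_eth_conf tx_none_conf = {
        .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
};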
@@ -106,9 +106,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6f20f98b2b30..08df716dc0fb 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -145,17 +145,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (hw_timestamping) {
- if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+ if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
printf("\nERROR: Port %u does not support hardware timestamping\n"
, port);
return -1;
}
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
if (hwts_dynfield_offset < 0) {
printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
};
const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
if (retval != 0)
return retval;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/*
* Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae08261befd7..737df4ca2a17 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -55,9 +55,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index bc3d71c8984e..b1d363ae21db 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -109,23 +109,23 @@ static int nb_sockets;
/* empty vmdq configuration structure. Filled in programmatically */
static struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
/*
* VLAN strip is necessary for 1G NICs such as I350;
* this fixes a bug where IPv4 forwarding in the guest can't
* forward packets from one virtio dev to another virtio dev.
*/
- .offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO),
},
.rx_adv_conf = {
/*
@@ -133,7 +133,7 @@ static struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -290,9 +290,9 @@ port_init(uint16_t port)
return -1;
rx_rings = (uint16_t)dev_info.max_rx_queues;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0) {
@@ -562,8 +562,8 @@ us_vhost_parse_args(int argc, char **argv)
case 'P':
promiscuous = 1;
vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
- ETH_VMDQ_ACCEPT_BROADCAST |
- ETH_VMDQ_ACCEPT_MULTICAST;
+ RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+ RTE_ETH_VMDQ_ACCEPT_MULTICAST;
break;
case OPT_VM2VM_NUM:
@@ -638,7 +638,7 @@ us_vhost_parse_args(int argc, char **argv)
mergeable = !!ret;
if (ret) {
vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
vmdq_conf_default.rxmode.max_rx_pkt_len
= JUMBO_FRAME_MAX_SIZE;
}
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..dddcde40efe2 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -78,9 +78,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -278,7 +278,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index d3bc19f78ee5..16782a5d850f 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
/* empty vmdq configuration structure. Filled in programmatically */
static const struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
(void)(rte_memcpy(ð_conf->rx_adv_conf.vmdq_rx_conf, &conf,
sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
if (retval != 0)
return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 685a03bdd194..3677a34da849 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
static unsigned num_ports;
/* number of pools (if user does not specify any, 32 by default) */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs num_tcs = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs num_tcs = RTE_ETH_4_TCS;
static uint16_t num_queues, num_vmdq_queues;
static uint16_t vmdq_pool_base, vmdq_queue_base;
static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
/* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
.dcb_tc = {0},
},
.dcb_rx_conf = {
- .nb_tcs = ETH_4_TCS,
+ .nb_tcs = RTE_ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.dcb_tc = {0},
},
},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
conf.pool_map[i].pools = 1UL << i;
vmdq_conf.pool_map[i].pools = 1UL << i;
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
(void)(rte_memcpy(ð_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
sizeof(tx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -390,9 +390,9 @@ vmdq_parse_num_pools(const char *q_arg)
if (n != 16 && n != 32)
return -1;
if (n == 16)
- num_pools = ETH_16_POOLS;
+ num_pools = RTE_ETH_16_POOLS;
else
- num_pools = ETH_32_POOLS;
+ num_pools = RTE_ETH_32_POOLS;
return 0;
}
@@ -412,9 +412,9 @@ vmdq_parse_num_tcs(const char *q_arg)
if (n != 4 && n != 8)
return -1;
if (n == 4)
- num_tcs = ETH_4_TCS;
+ num_tcs = RTE_ETH_4_TCS;
else
- num_tcs = ETH_8_TCS;
+ num_tcs = RTE_ETH_8_TCS;
return 0;
}
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1b5..9ccbd7db4063 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -98,9 +98,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
#define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
#define RTE_RX_OFFLOAD_BIT2STR(_name) \
- { DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name) \
{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
static const struct {
@@ -126,14 +123,14 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
- RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
};
#undef RTE_RX_OFFLOAD_BIT2STR
#undef RTE_ETH_RX_OFFLOAD_BIT2STR
#define RTE_TX_OFFLOAD_BIT2STR(_name) \
- { DEV_TX_OFFLOAD_##_name, #_name }
+ { RTE_ETH_TX_OFFLOAD_##_name, #_name }
static const struct {
uint64_t offload;
@@ -1184,32 +1181,32 @@ uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
switch (speed) {
- case ETH_SPEED_NUM_10M:
- return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
- case ETH_SPEED_NUM_100M:
- return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
- case ETH_SPEED_NUM_1G:
- return ETH_LINK_SPEED_1G;
- case ETH_SPEED_NUM_2_5G:
- return ETH_LINK_SPEED_2_5G;
- case ETH_SPEED_NUM_5G:
- return ETH_LINK_SPEED_5G;
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10M:
+ return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+ case RTE_ETH_SPEED_NUM_100M:
+ return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+ case RTE_ETH_SPEED_NUM_1G:
+ return RTE_ETH_LINK_SPEED_1G;
+ case RTE_ETH_SPEED_NUM_2_5G:
+ return RTE_ETH_LINK_SPEED_2_5G;
+ case RTE_ETH_SPEED_NUM_5G:
+ return RTE_ETH_LINK_SPEED_5G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
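
A typical consumer of the renamed helper, for context — building a fixed-speed link_speeds mask before rte_eth_dev_configure() (sketch):

#include <rte_ethdev.h>

/* Request a fixed speed, falling back to autonegotiation when
 * rte_eth_speed_bitflag() does not know the numeric speed. */
static void
request_fixed_speed(struct rte_eth_conf *conf, uint32_t speed_num)
{
        uint32_t flag = rte_eth_speed_bitflag(speed_num,
                                              RTE_ETH_LINK_FULL_DUPLEX);

        conf->link_speeds = flag ? (flag | RTE_ETH_LINK_SPEED_FIXED)
                                 : RTE_ETH_LINK_SPEED_AUTONEG;
}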
@@ -1458,7 +1455,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If jumbo frames are enabled, check that the maximum RX packet
* length is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
@@ -1491,7 +1488,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size =
dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1543,12 +1540,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
- if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
- (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+ (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
port_id,
- rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+ rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
ret = -EINVAL;
goto rollback;
}
@@ -2157,7 +2154,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size =
dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -2752,21 +2749,21 @@ const char *
rte_eth_link_speed_to_str(uint32_t link_speed)
{
switch (link_speed) {
- case ETH_SPEED_NUM_NONE: return "None";
- case ETH_SPEED_NUM_10M: return "10 Mbps";
- case ETH_SPEED_NUM_100M: return "100 Mbps";
- case ETH_SPEED_NUM_1G: return "1 Gbps";
- case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
- case ETH_SPEED_NUM_5G: return "5 Gbps";
- case ETH_SPEED_NUM_10G: return "10 Gbps";
- case ETH_SPEED_NUM_20G: return "20 Gbps";
- case ETH_SPEED_NUM_25G: return "25 Gbps";
- case ETH_SPEED_NUM_40G: return "40 Gbps";
- case ETH_SPEED_NUM_50G: return "50 Gbps";
- case ETH_SPEED_NUM_56G: return "56 Gbps";
- case ETH_SPEED_NUM_100G: return "100 Gbps";
- case ETH_SPEED_NUM_200G: return "200 Gbps";
- case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+ case RTE_ETH_SPEED_NUM_NONE: return "None";
+ case RTE_ETH_SPEED_NUM_10M: return "10 Mbps";
+ case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+ case RTE_ETH_SPEED_NUM_1G: return "1 Gbps";
+ case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+ case RTE_ETH_SPEED_NUM_5G: return "5 Gbps";
+ case RTE_ETH_SPEED_NUM_10G: return "10 Gbps";
+ case RTE_ETH_SPEED_NUM_20G: return "20 Gbps";
+ case RTE_ETH_SPEED_NUM_25G: return "25 Gbps";
+ case RTE_ETH_SPEED_NUM_40G: return "40 Gbps";
+ case RTE_ETH_SPEED_NUM_50G: return "50 Gbps";
+ case RTE_ETH_SPEED_NUM_56G: return "56 Gbps";
+ case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+ case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+ case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
default: return "Invalid";
}
}
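
And the usual caller of these string helpers under the new constants (sketch):

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the current link state of a port. */
static void
print_link(uint16_t port_id)
{
        struct rte_eth_link link;
        char buf[RTE_ETH_LINK_MAX_STR_LEN];

        if (rte_eth_link_get_nowait(port_id, &link) < 0)
                return;
        rte_eth_link_to_str(buf, sizeof(buf), &link);
        printf("Port %u: %s\n", port_id, buf);
}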
@@ -2790,14 +2787,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
return -EINVAL;
}
- if (eth_link->link_status == ETH_LINK_DOWN)
+ if (eth_link->link_status == RTE_ETH_LINK_DOWN)
return snprintf(str, len, "Link down");
else
return snprintf(str, len, "Link up at %s %s %s",
rte_eth_link_speed_to_str(eth_link->link_speed),
- (eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"FDX" : "HDX",
- (eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+ (eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
"Autoneg" : "Fixed");
}
@@ -3663,7 +3660,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
dev = &rte_eth_devices[port_id];
if (!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
port_id);
return -ENOSYS;
@@ -3750,44 +3747,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
dev_offloads = orig_offloads;
/* check which option changed by application */
- cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
- mask |= ETH_VLAN_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ mask |= RTE_ETH_VLAN_STRIP_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
- mask |= ETH_VLAN_FILTER_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+ mask |= RTE_ETH_VLAN_FILTER_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+ cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
- mask |= ETH_VLAN_EXTEND_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+ mask |= RTE_ETH_VLAN_EXTEND_MASK;
}
- cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+ cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
- mask |= ETH_QINQ_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+ mask |= RTE_ETH_QINQ_STRIP_MASK;
}
/*no change*/
@@ -3832,17 +3829,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
dev = &rte_eth_devices[port_id];
dev_offloads = &dev->data->dev_conf.rxmode.offloads;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- ret |= ETH_VLAN_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- ret |= ETH_VLAN_FILTER_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
- ret |= ETH_VLAN_EXTEND_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+ ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
- ret |= ETH_QINQ_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+ ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
return ret;
}
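
For the mask translation above, a typical caller under the renamed flags (sketch; error handling trimmed):

#include <rte_ethdev.h>

/* Enable VLAN stripping and disable filtering, leaving other
 * VLAN offload bits untouched. */
static int
set_vlan_strip_only(uint16_t port_id)
{
        int mask = rte_eth_dev_get_vlan_offload(port_id);

        if (mask < 0)
                return mask;
        mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
        mask &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
        return rte_eth_dev_set_vlan_offload(port_id, mask);
}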
@@ -3919,7 +3916,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
return -EINVAL;
}
- if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+ if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
return -EINVAL;
}
@@ -3937,7 +3934,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
{
uint16_t i, num;
- num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+ num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
for (i = 0; i < num; i++) {
if (reta_conf[i].mask)
return 0;
@@ -3959,8 +3956,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
}
for (i = 0; i < reta_size; i++) {
- idx = i / RTE_RETA_GROUP_SIZE;
- shift = i % RTE_RETA_GROUP_SIZE;
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
if ((reta_conf[idx].mask & (1ULL << shift)) &&
(reta_conf[idx].reta[shift] >= max_rxq)) {
RTE_ETHDEV_LOG(ERR,
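
[Editor's note: the idx/shift arithmetic above is how applications address the 64-entry groups of rte_eth_rss_reta_entry64. A sketch filling a RETA round-robin, assuming reta_size comes from dev_info.reta_size and is a multiple of RTE_ETH_RETA_GROUP_SIZE.]

	struct rte_eth_rss_reta_entry64 reta_conf[reta_size / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= 1ULL << shift;		/* mark entry valid */
		reta_conf[idx].reta[shift] = i % nb_rx_queues;	/* target Rx queue */
	}
	ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
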
@@ -4116,7 +4113,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4142,7 +4139,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4283,8 +4280,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
port_id);
return -EINVAL;
}
- if (pool >= ETH_64_POOLS) {
- RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+ if (pool >= RTE_ETH_64_POOLS) {
+ RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
@@ -4548,21 +4545,21 @@ rte_eth_mirror_rule_set(uint16_t port_id,
return -EINVAL;
}
- if (mirror_conf->dst_pool >= ETH_64_POOLS) {
+ if (mirror_conf->dst_pool >= RTE_ETH_64_POOLS) {
RTE_ETHDEV_LOG(ERR, "Invalid dst pool, pool id must be 0-%d\n",
- ETH_64_POOLS - 1);
+ RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
- if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
- ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
+ if ((mirror_conf->rule_type & (RTE_ETH_MIRROR_VIRTUAL_POOL_UP |
+ RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
(mirror_conf->pool_mask == 0)) {
RTE_ETHDEV_LOG(ERR,
"Invalid mirror pool, pool mask can not be 0\n");
return -EINVAL;
}
- if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
+ if ((mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) &&
mirror_conf->vlan.vlan_mask == 0) {
RTE_ETHDEV_LOG(ERR,
"Invalid vlan mask, vlan mask can not be 0\n");
@@ -6238,7 +6235,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
rte_tel_data_add_dict_string(d, status_str, "UP");
rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
rte_tel_data_add_dict_string(d, "duplex",
- (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex");
return 0;
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351fdb..cabfe452c808 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -249,7 +249,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
* field is not supported, its value is 0.
* All byte-related statistics do not include Ethernet FCS regardless
* of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
*/
struct rte_eth_stats {
uint64_t ipackets; /**< Total number of successfully received packets. */
@@ -279,61 +279,99 @@ struct rte_eth_stats {
/**
* Device supported speeds bitmap flags
*/
-#define ETH_LINK_SPEED_AUTONEG (0 << 0) /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED (1 << 0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD (1 << 1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M (1 << 2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD (1 << 3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M (1 << 4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G (1 << 5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_2_5G (1 << 6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G (1 << 7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_10G (1 << 8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_20G (1 << 9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_25G (1 << 10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_40G (1 << 11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_50G (1 << 12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_56G (1 << 13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_100G (1 << 14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G (1 << 15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG (0 << 0) /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED (1 << 0) /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD (1 << 1) /**< 10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M (1 << 2) /**< 10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD (1 << 3) /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M (1 << 4) /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G (1 << 5) /**< 1 Gbps */
+#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G (1 << 6) /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G (1 << 7) /**< 5 Gbps */
+#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G (1 << 8) /**< 10 Gbps */
+#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G (1 << 9) /**< 20 Gbps */
+#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G (1 << 10) /**< 25 Gbps */
+#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G (1 << 11) /**< 40 Gbps */
+#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G (1 << 12) /**< 50 Gbps */
+#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G (1 << 13) /**< 56 Gbps */
+#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G (1 << 14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G (1 << 15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**
* Ethernet numeric link speeds in Mbps
*/
-#define ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
+#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
+#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
+#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
+#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
+#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
+#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
+#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
+#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
+#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**
* A structure used to retrieve link-level information of an Ethernet port.
*/
__extension__
struct rte_eth_link {
- uint32_t link_speed; /**< ETH_SPEED_NUM_ */
- uint16_t link_duplex : 1; /**< ETH_LINK_[HALF/FULL]_DUPLEX */
- uint16_t link_autoneg : 1; /**< ETH_LINK_[AUTONEG/FIXED] */
- uint16_t link_status : 1; /**< ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /**< RTE_ETH_SPEED_NUM_ */
+ uint16_t link_duplex : 1; /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint16_t link_autoneg : 1; /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint16_t link_status : 1; /**< RTE_ETH_LINK_[DOWN/UP] */
} __rte_aligned(8); /**< aligned for atomic64 read/write */
/* Utility constants */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
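
[Editor's note: a short usage sketch of the renamed link constants, assuming a started port.]

	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
	    link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: %u Mbps %s\n", port_id, link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full-duplex" : "half-duplex");
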
/**
@@ -349,9 +387,12 @@ struct rte_eth_thresh {
/**
* Simple flags are used for rte_eth_conf.rxmode.mq_mode.
*/
-#define ETH_MQ_RX_RSS_FLAG 0x1
-#define ETH_MQ_RX_DCB_FLAG 0x2
-#define ETH_MQ_RX_VMDQ_FLAG 0x4
+#define RTE_ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**
* A set of values to identify what method is to be used to route
@@ -359,50 +400,49 @@ struct rte_eth_thresh {
*/
enum rte_eth_rx_mq_mode {
/** None of DCB,RSS or VMDQ mode */
- ETH_MQ_RX_NONE = 0,
+ RTE_ETH_MQ_RX_NONE = 0,
/** For RX side, only RSS is on */
- ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
/** For RX side,only DCB is on. */
- ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
/** Both DCB and RSS enable */
- ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Only VMDQ, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
/** RSS mode with VMDQ */
- ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
/** Use VMDQ+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Enable both VMDQ and DCB in VMDq */
- ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
- ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+ RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS ETH_MQ_RX_RSS
-#define VMDQ_DCB ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
* packets using multi-TCs.
*/
enum rte_eth_tx_mq_mode {
- ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
- ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
- ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
+ RTE_ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
+ RTE_ETH_MQ_TX_DCB, /**< For TX side, only DCB is on. */
+ RTE_ETH_MQ_TX_VMDQ_DCB, /**< For TX side, both DCB and VT are on. */
+ RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the RX features of an Ethernet port.
@@ -415,7 +455,7 @@ struct rte_eth_rxmode {
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
- * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -430,12 +470,17 @@ struct rte_eth_rxmode {
* Note that single VLAN is treated the same as inner VLAN.
*/
enum rte_vlan_type {
- ETH_VLAN_TYPE_UNKNOWN = 0,
- ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
- ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
- ETH_VLAN_TYPE_MAX,
+ RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+ RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+ RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+ RTE_ETH_VLAN_TYPE_MAX,
};
+#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+
/**
* A structure used to describe a vlan filter.
* If the bit corresponding to a VID is set, such VID is on.
@@ -506,59 +551,96 @@ struct rte_eth_rss_conf {
* Below macros are defined for RSS offload types, they can be used to
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
-#define ETH_RSS_IPV4 (1ULL << 2)
-#define ETH_RSS_FRAG_IPV4 (1ULL << 3)
-#define ETH_RSS_NONFRAG_IPV4_TCP (1ULL << 4)
-#define ETH_RSS_NONFRAG_IPV4_UDP (1ULL << 5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP (1ULL << 6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
-#define ETH_RSS_IPV6 (1ULL << 8)
-#define ETH_RSS_FRAG_IPV6 (1ULL << 9)
-#define ETH_RSS_NONFRAG_IPV6_TCP (1ULL << 10)
-#define ETH_RSS_NONFRAG_IPV6_UDP (1ULL << 11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP (1ULL << 12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
-#define ETH_RSS_L2_PAYLOAD (1ULL << 14)
-#define ETH_RSS_IPV6_EX (1ULL << 15)
-#define ETH_RSS_IPV6_TCP_EX (1ULL << 16)
-#define ETH_RSS_IPV6_UDP_EX (1ULL << 17)
-#define ETH_RSS_PORT (1ULL << 18)
-#define ETH_RSS_VXLAN (1ULL << 19)
-#define ETH_RSS_GENEVE (1ULL << 20)
-#define ETH_RSS_NVGRE (1ULL << 21)
-#define ETH_RSS_GTPU (1ULL << 23)
-#define ETH_RSS_ETH (1ULL << 24)
-#define ETH_RSS_S_VLAN (1ULL << 25)
-#define ETH_RSS_C_VLAN (1ULL << 26)
-#define ETH_RSS_ESP (1ULL << 27)
-#define ETH_RSS_AH (1ULL << 28)
-#define ETH_RSS_L2TPV3 (1ULL << 29)
-#define ETH_RSS_PFCP (1ULL << 30)
-#define ETH_RSS_PPPOE (1ULL << 31)
-#define ETH_RSS_ECPRI (1ULL << 32)
-#define ETH_RSS_MPLS (1ULL << 33)
+#define RTE_ETH_RSS_IPV4 (1ULL << 2)
+#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4 (1ULL << 3)
+#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP (1ULL << 4)
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP (1ULL << 5)
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP (1ULL << 6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6 (1ULL << 8)
+#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6 (1ULL << 9)
+#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP (1ULL << 10)
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP (1ULL << 11)
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP (1ULL << 12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD (1ULL << 14)
+#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX (1ULL << 15)
+#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX (1ULL << 16)
+#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX (1ULL << 17)
+#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT (1ULL << 18)
+#define ETH_RSS_PORT RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN (1ULL << 19)
+#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE (1ULL << 20)
+#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE (1ULL << 21)
+#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU (1ULL << 23)
+#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH (1ULL << 24)
+#define ETH_RSS_ETH RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN (1ULL << 25)
+#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN (1ULL << 26)
+#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP (1ULL << 27)
+#define ETH_RSS_ESP RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH (1ULL << 28)
+#define ETH_RSS_AH RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3 (1ULL << 29)
+#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP (1ULL << 30)
+#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE (1ULL << 31)
+#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI (1ULL << 32)
+#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS (1ULL << 33)
+#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
/*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
* more specific input set selection. These bits are defined starting
* from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
* both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
* the same level are used simultaneously, it is the same case as none of
* them are added.
*/
-#define ETH_RSS_L3_SRC_ONLY (1ULL << 63)
-#define ETH_RSS_L3_DST_ONLY (1ULL << 62)
-#define ETH_RSS_L4_SRC_ONLY (1ULL << 61)
-#define ETH_RSS_L4_DST_ONLY (1ULL << 60)
-#define ETH_RSS_L2_SRC_ONLY (1ULL << 59)
-#define ETH_RSS_L2_DST_ONLY (1ULL << 58)
+#define RTE_ETH_RSS_L3_SRC_ONLY (1ULL << 63)
+#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY (1ULL << 62)
+#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY (1ULL << 61)
+#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY (1ULL << 60)
+#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY (1ULL << 59)
+#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY (1ULL << 58)
+#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
/*
* Only select IPV6 address prefix as RSS input set according to
* https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
*/
#define RTE_ETH_RSS_L3_PRE32 (1ULL << 57)
#define RTE_ETH_RSS_L3_PRE40 (1ULL << 56)
@@ -580,22 +662,27 @@ struct rte_eth_rss_conf {
* It basically stands for the innermost encapsulation level RSS
* can be performed on according to PMD and device capabilities.
*/
-#define ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
-#define ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
-#define ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
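
[Editor's note: to illustrate the level encoding in bits 50-51, a small sketch.]

	uint64_t rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_LEVEL_INNERMOST;

	/* RTE_ETH_RSS_LEVEL() recovers the requested encapsulation level:
	 * 0 = PMD default, 1 = outermost, 2 = innermost.
	 * Here RTE_ETH_RSS_LEVEL(rss_hf) == 2.
	 */
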
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -610,222 +697,286 @@ struct rte_eth_rss_conf {
static inline uint64_t
rte_eth_rss_hf_refine(uint64_t rss_hf)
{
- if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
- if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
- rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+ rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
return rss_hf;
}
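
[Editor's note: for example, requesting both L4 SRC_ONLY and DST_ONLY is contradictory, so the refine step above cancels them and the hash falls back to using source and destination ports together.]

	uint64_t rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY;

	rss_hf = rte_eth_rss_hf_refine(rss_hf);
	/* rss_hf is now just RTE_ETH_RSS_NONFRAG_IPV4_TCP */
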
-#define ETH_RSS_IPV6_PRE32 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
-#define ETH_RSS_IPV6_PRE40 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
-#define ETH_RSS_IPV6_PRE48 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
-#define ETH_RSS_IPV6_PRE56 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
-#define ETH_RSS_IPV6_PRE64 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
-#define ETH_RSS_IPV6_PRE96 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
-#define ETH_RSS_IPV6_PRE32_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
-#define ETH_RSS_IPV6_PRE40_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
-#define ETH_RSS_IPV6_PRE48_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
-#define ETH_RSS_IPV6_PRE56_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
-#define ETH_RSS_IPV6_PRE64_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
-#define ETH_RSS_IPV6_PRE96_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
-#define ETH_RSS_IPV6_PRE32_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
-#define ETH_RSS_IPV6_PRE40_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
-#define ETH_RSS_IPV6_PRE48_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
-#define ETH_RSS_IPV6_PRE56_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
-#define ETH_RSS_IPV6_PRE64_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
-#define ETH_RSS_IPV6_PRE96_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
- ETH_RSS_S_VLAN | \
- ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
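
[Editor's note: a hedged configuration sketch using the grouped macros; in practice the requested rss_hf should also be masked against dev_info.flow_type_rss_offloads.]

	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL,	/* use the driver default key */
				.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
					  RTE_ETH_RSS_UDP,
			},
		},
	};
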
/**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | \
- ETH_RSS_PORT | \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE | \
- ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | \
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE | \
+ RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
* Some RSS RETA sizes may not be supported by some drivers, check the
* documentation or the description of relevant functions for more details.
*/
-#define ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE 64
+#define RTE_ETH_RSS_RETA_SIZE_64 64
+#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE 64
+#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
/* Definitions used for VMDQ and DCB functionality */
-#define ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/* DCB capability defines */
-#define ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/* Definitions used for VLAN Offload functionality */
-#define ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
/* Definitions used for mask VLAN setting */
-#define ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
+#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
+#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
+#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
+#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/* Definitions used for receive MAC address */
-#define ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/* Definitions used for VMDQ pool rx mode setting */
-#define ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/** Maximum nb. of vlan per mirror rule */
-#define ETH_MIRROR_MAX_VLANS 64
+#define RTE_ETH_MIRROR_MAX_VLANS 64
+#define ETH_MIRROR_MAX_VLANS RTE_ETH_MIRROR_MAX_VLANS
-#define ETH_MIRROR_VIRTUAL_POOL_UP 0x01 /**< Virtual Pool uplink Mirroring. */
-#define ETH_MIRROR_UPLINK_PORT 0x02 /**< Uplink Port Mirroring. */
-#define ETH_MIRROR_DOWNLINK_PORT 0x04 /**< Downlink Port Mirroring. */
-#define ETH_MIRROR_VLAN 0x08 /**< VLAN Mirroring. */
-#define ETH_MIRROR_VIRTUAL_POOL_DOWN 0x10 /**< Virtual Pool downlink Mirroring. */
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP 0x01 /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT 0x02 /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT 0x04 /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN 0x08 /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN 0x10 /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
/**
* A structure used to configure VLAN traffic mirror of an Ethernet port.
@@ -833,7 +984,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
struct rte_eth_vlan_mirror {
uint64_t vlan_mask; /**< mask for valid VLAN ID. */
/** VLAN ID list for vlan mirroring. */
- uint16_t vlan_id[ETH_MIRROR_MAX_VLANS];
+ uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
};
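
[Editor's note: a minimal sketch of a VLAN mirror rule built on the renamed macros, assuming the PMD implements the (legacy) mirror API.]

	struct rte_eth_mirror_conf mirror_conf = {
		.rule_type = RTE_ETH_MIRROR_VLAN,
		.dst_pool = 0,
		.vlan = {
			.vlan_mask = 1ULL,	/* only vlan_id[0] is valid */
			.vlan_id = { 100 },
		},
	};

	ret = rte_eth_mirror_rule_set(port_id, &mirror_conf, 0 /* rule_id */, 1 /* on */);
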
/**
@@ -856,7 +1007,7 @@ struct rte_eth_mirror_conf {
struct rte_eth_rss_reta_entry64 {
uint64_t mask;
/**< Mask bits indicate which entries need to be updated/queried. */
- uint16_t reta[RTE_RETA_GROUP_SIZE];
+ uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
/**< Group of 64 redirection table entries. */
};
@@ -865,38 +1016,44 @@ struct rte_eth_rss_reta_entry64 {
* in DCB configurations
*/
enum rte_eth_nb_tcs {
- ETH_4_TCS = 4, /**< 4 TCs with DCB. */
- ETH_8_TCS = 8 /**< 8 TCs with DCB. */
+ RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+ RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
* in VMDQ configurations.
*/
enum rte_eth_nb_pools {
- ETH_8_POOLS = 8, /**< 8 VMDq pools. */
- ETH_16_POOLS = 16, /**< 16 VMDq pools. */
- ETH_32_POOLS = 32, /**< 32 VMDq pools. */
- ETH_64_POOLS = 64 /**< 64 VMDq pools. */
+ RTE_ETH_8_POOLS = 8, /**< 8 VMDq pools. */
+ RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */
+ RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
+ RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
+#define ETH_8_POOLS RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_ETH_64_POOLS
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_dcb_tx_conf {
enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_dcb_tx_conf {
enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
/** Traffic class each UP mapped to. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};
struct rte_eth_vmdq_tx_conf {
@@ -922,8 +1079,8 @@ struct rte_eth_vmdq_dcb_conf {
struct {
uint16_t vlan_id; /**< The vlan id of the received frame */
uint64_t pools; /**< Bitmask of pools for packet rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
- uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+ uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
/**< Selects a queue in a pool */
};
@@ -934,7 +1091,7 @@ struct rte_eth_vmdq_dcb_conf {
* Using this feature, packets are routed to a pool of queues. By default,
* the pool selection is based on the MAC address and the VLAN ID in the
* VLAN tag, as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
* selection using only the MAC address. MAC address to pool mapping is done
* using the rte_eth_dev_mac_addr_add function, with the pool parameter
* corresponding to the pool id.
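
[Editor's note: a sketch of that MAC-to-pool mapping; the address below is only illustrative.]

	struct rte_ether_addr addr = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x05 },
	};

	/* untagged frames to this MAC land in VMDq pool 5 when
	 * RTE_ETH_VMDQ_ACCEPT_UNTAG is set in rx_mode
	 */
	ret = rte_eth_dev_mac_addr_add(port_id, &addr, 5);
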
@@ -955,7 +1112,7 @@ struct rte_eth_vmdq_rx_conf {
struct {
uint16_t vlan_id; /**< The vlan id of the received frame */
uint64_t pools; /**< Bitmask of pools for packet rx */
- } pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+ } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
};
/**
@@ -964,7 +1121,7 @@ struct rte_eth_vmdq_rx_conf {
struct rte_eth_txmode {
enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
/**
- * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -1048,7 +1205,7 @@ struct rte_eth_rxconf {
uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
/**
- * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_queue_offload_capa or rx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1077,7 +1234,7 @@ struct rte_eth_txconf {
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
- * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_queue_offload_capa or tx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1188,12 +1345,17 @@ struct rte_eth_desc_lim {
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
- RTE_FC_NONE = 0, /**< Disable flow control. */
- RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
- RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
- RTE_FC_FULL /**< Enable flow control on both side. */
+ RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+ RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+ RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+ RTE_ETH_FC_FULL /**< Enable flow control on both sides. */
};
+#define RTE_FC_NONE RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_ETH_FC_FULL
+
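
[Editor's note: a read-modify-write sketch with the renamed flow control modes.]

	struct rte_eth_fc_conf fc_conf;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret == 0) {
		fc_conf.mode = RTE_ETH_FC_FULL;	/* pause frames in both directions */
		ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}
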
/**
* A structure used to configure Ethernet flow control parameter.
* These parameters will be configured into the register of the NIC.
@@ -1224,18 +1386,29 @@ struct rte_eth_pfc_conf {
* @see rte_eth_udp_tunnel
*/
enum rte_eth_tunnel_type {
- RTE_TUNNEL_TYPE_NONE = 0,
- RTE_TUNNEL_TYPE_VXLAN,
- RTE_TUNNEL_TYPE_GENEVE,
- RTE_TUNNEL_TYPE_TEREDO,
- RTE_TUNNEL_TYPE_NVGRE,
- RTE_TUNNEL_TYPE_IP_IN_GRE,
- RTE_L2_TUNNEL_TYPE_E_TAG,
- RTE_TUNNEL_TYPE_VXLAN_GPE,
- RTE_TUNNEL_TYPE_ECPRI,
- RTE_TUNNEL_TYPE_MAX,
+ RTE_ETH_TUNNEL_TYPE_NONE = 0,
+ RTE_ETH_TUNNEL_TYPE_VXLAN,
+ RTE_ETH_TUNNEL_TYPE_GENEVE,
+ RTE_ETH_TUNNEL_TYPE_TEREDO,
+ RTE_ETH_TUNNEL_TYPE_NVGRE,
+ RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+ RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+ RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+ RTE_ETH_TUNNEL_TYPE_ECPRI,
+ RTE_ETH_TUNNEL_TYPE_MAX,
};
+#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+
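
[Editor's note: typical usage of the renamed tunnel types, e.g. registering the IANA-assigned VXLAN port so the device can recognize the encapsulation.]

	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789,
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	ret = rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
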
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1243,11 +1416,16 @@ enum rte_eth_tunnel_type {
* Memory space that can be configured to store Flow Director filters
* in the board memory.
*/
-enum rte_fdir_pballoc_type {
- RTE_FDIR_PBALLOC_64K = 0, /**< 64k. */
- RTE_FDIR_PBALLOC_128K, /**< 128k. */
- RTE_FDIR_PBALLOC_256K, /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+ RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */
+ RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */
+ RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */
};
+#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in RX descriptors.
@@ -1264,9 +1442,9 @@ enum rte_fdir_status_mode {
*
* If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
*/
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
enum rte_fdir_mode mode; /**< Flow Director mode. */
- enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+ enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
enum rte_fdir_status_mode status; /**< How to report FDIR hash. */
/** RX queue of packets matching a "drop" filter in perfect mode. */
uint8_t drop_queue;
@@ -1275,6 +1453,8 @@ struct rte_fdir_conf {
/**< Flex payload configuration. */
};
+#define rte_fdir_conf rte_eth_fdir_conf
+
/**
* UDP tunneling configuration.
*
@@ -1292,7 +1472,7 @@ struct rte_eth_udp_tunnel {
/**
* A structure used to enable/disable specific device interrupts.
*/
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint32_t lsc:1;
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1301,18 +1481,20 @@ struct rte_intr_conf {
uint32_t rmv:1;
};
+#define rte_intr_conf rte_eth_intr_conf
+
/**
* A structure used to configure an Ethernet port.
* Depending upon the RX multi-queue mode, extra advanced
* configuration settings may be needed.
*/
struct rte_eth_conf {
- uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
- used. ETH_LINK_SPEED_FIXED disables link
+ uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+ used. RTE_ETH_LINK_SPEED_FIXED disables link
autonegotiation, and a unique speed shall be
set. Otherwise, the bitmap defines the set of
speeds to be advertised. If the special value
- ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+ RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
supported are advertised. */
struct rte_eth_rxmode rxmode; /**< Port RX configuration. */
struct rte_eth_txmode txmode; /**< Port TX configuration. */
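
[Editor's note: for instance, fixing a port at 10G versus advertising everything; a sketch, and the chosen speed must appear in dev_info.speed_capa.]

	struct rte_eth_conf port_conf = {
		.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G,
	};

	/* or, to autonegotiate across all supported speeds: */
	port_conf.link_speeds = RTE_ETH_LINK_SPEED_AUTONEG;
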
@@ -1338,49 +1520,72 @@ struct rte_eth_conf {
struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
/**< Port vmdq TX configuration. */
} tx_adv_conf; /**< Port TX DCB configuration (union). */
- /** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
- is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+ /**
+ * Currently, Priority Flow Control (PFC) is supported. If DCB with PFC
+ * is needed, the variable must be set to RTE_ETH_DCB_PFC_SUPPORT.
+ */
uint32_t dcb_capability_en;
- struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
- struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+ struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+ struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
};
/**
* RX offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP 0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
-#define DEV_RX_OFFLOAD_SCATTER 0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP 0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER 0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_JUMBO_FRAME 0x00000800
+#define DEV_RX_OFFLOAD_JUMBO_FRAME RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+#define RTE_ETH_RX_OFFLOAD_SCATTER 0x00002000
+#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
-#define DEV_RX_OFFLOAD_SECURITY 0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC 0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM 0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY 0x00008000
+#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC 0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH 0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
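
[Editor's note: applications are expected to gate these bits on the reported capabilities before enabling them, e.g. (a sketch):]

	struct rte_eth_dev_info dev_info;
	uint64_t req = RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret == 0 && (dev_info.rx_offload_capa & req) == req)
		port_conf.rxmode.offloads |= req;
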
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1390,52 +1595,74 @@ struct rte_eth_conf {
/**
* TX offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT 0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* tx queue without SW lock.
*/
-#define DEV_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
/**< Device supports optimization for fast release of mbufs.
* When set, the application must guarantee that per queue all mbufs come
* from the same mempool and have refcnt = 1.
*/
-#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1567,7 +1794,7 @@ struct rte_eth_dev_info {
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
- uint32_t speed_capa; /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ uint32_t speed_capa; /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
@@ -1672,8 +1899,10 @@ struct rte_eth_xstat_name {
char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
};
-#define ETH_DCB_NUM_TCS 8
-#define ETH_MAX_VMDQ_POOL 64
+#define RTE_ETH_DCB_NUM_TCS 8
+#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL 64
+#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
@@ -1684,12 +1913,12 @@ struct rte_eth_dcb_tc_queue_mapping {
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
/** rx queues assigned to tc per Pool */
struct {
uint16_t base;
uint16_t nb_queue;
- } tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+ } tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
};
/**
@@ -1698,8 +1927,8 @@ struct rte_eth_dcb_tc_queue_mapping {
*/
struct rte_eth_dcb_info {
uint8_t nb_tcs; /**< number of TCs */
- uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
- uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+ uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+ uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
/** rx queues assigned to tc */
struct rte_eth_dcb_tc_queue_mapping tc_queue;
};
@@ -1723,7 +1952,7 @@ enum rte_eth_fec_mode {
/* A structure used to get capabilities per link speed */
struct rte_eth_fec_capa {
- uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+ uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
uint32_t capa; /**< FEC capabilities bitmask */
};
@@ -1749,13 +1978,17 @@ struct rte_eth_fec_capa {
*/
/**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK RTE_ETH_L2_TUNNEL_ENABLE_MASK
/**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK RTE_ETH_L2_TUNNEL_INSERTION_MASK
/**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK RTE_ETH_L2_TUNNEL_STRIPPING_MASK
/**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK RTE_ETH_L2_TUNNEL_FORWARDING_MASK
/**
* Function type used for RX packet processing packet callbacks.
@@ -2068,14 +2301,14 @@ uint16_t rte_eth_dev_count_total(void);
* @param speed
* Numerical speed value in Mbps
* @param duplex
- * ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ * RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
* @return
* 0 if the speed cannot be mapped
*/
uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
/**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2085,7 +2318,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
const char *rte_eth_dev_rx_offload_name(uint64_t offload);
/**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2179,7 +2412,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
* of the Prefetch, Host, and Write-Back threshold registers of the receive
* ring.
* In addition it contains the hardware offloads features to activate using
- * the DEV_RX_OFFLOAD_* flags.
+ * the RTE_ETH_RX_OFFLOAD_* flags.
* If an offloading set in rx_conf->offloads
* hasn't been set in the input argument eth_conf->rxmode.offloads
* to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2756,7 +2989,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
*
* @param str
* A pointer to a string to be filled with textual representation of
- * device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ * device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
* store default link status text.
* @param len
* Length of available memory at 'str' string.
@@ -3261,10 +3494,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
* The port identifier of the Ethernet device.
* @param offload_mask
* The VLAN Offload bit mask can be mixed use with "OR"
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* @return
* - (0) if successful.
* - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3280,10 +3513,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
* The port identifier of the Ethernet device.
* @return
* - (>0) if successful. Bit mask to indicate
- * ETH_VLAN_STRIP_OFFLOAD
- * ETH_VLAN_FILTER_OFFLOAD
- * ETH_VLAN_EXTEND_OFFLOAD
- * ETH_QINQ_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_STRIP_OFFLOAD
+ * RTE_ETH_VLAN_FILTER_OFFLOAD
+ * RTE_ETH_VLAN_EXTEND_OFFLOAD
+ * RTE_ETH_QINQ_STRIP_OFFLOAD
* - (-ENODEV) if *port_id* invalid.
*/
int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5231,7 +5464,7 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
* rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers
* of those packets whose transmission was effectively completed.
*
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
* invoke this function concurrently on the same tx queue without SW lock.
* @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
*
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc2e..8e6156a62aa9 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -154,7 +154,7 @@ struct rte_eth_dev_data {
/**< Device Ethernet link address.
* @see rte_eth_dev_release_port()
*/
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
/**< Bitmap associating MAC addresses to pools. */
struct rte_ether_addr *hash_mac_addrs;
/**< Device Ethernet MAC addresses of hash filtering.
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d60..4152067368b8 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2593,7 +2593,7 @@ struct rte_flow_action_rss {
* through.
*/
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
#include "gso_udp4.h"
#define ILLEGAL_UDP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+ ((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
(ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
#define ILLEGAL_TCP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+ ((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
ol_flags = pkt->ol_flags;
if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_tunnel_udp4_segment(pkt, gso_size,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_TCP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_UDP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_udp4_segment(pkt, gso_size, direct_pool,
indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
uint32_t gso_types;
/**< the bit mask of required GSO types. The GSO library
* uses the same macros as that of describing device TX
- * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+ * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
* gso_types.
*
* For example, if applications want to segment TCP/IPv4
- * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+ * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
*/
uint16_t gso_size;
/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f58102..50e611e887bf 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -192,7 +192,7 @@ extern "C" {
* The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
* HW capability, At minimum, the PMD should support
* PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
*/
#define PKT_RX_OUTER_L4_CKSUM_MASK ((1ULL << 21) | (1ULL << 22))
@@ -215,7 +215,7 @@ extern "C" {
* a) Fill outer_l2_len and outer_l3_len in mbuf.
* b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
* c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
*/
#define PKT_TX_OUTER_UDP_CKSUM (1ULL << 41)
@@ -258,7 +258,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
* or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
@@ -271,7 +271,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
* if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 13f06d8ed25b..be43f8c328e1 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
* of the dynamic field to be registered:
* const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
* - The application initializes the PMD, and asks for this feature
- * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
* rxconf. This will make the PMD to register the field by calling
* rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
* stores the returned offset.
--
2.31.1
^ permalink raw reply [relevance 1%]
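To make the rename above concrete: since every old DEV_TX_OFFLOAD_* macro is
kept as an alias for its RTE_ETH_TX_OFFLOAD_* counterpart, applications can
migrate one flag at a time with no logic change. A minimal sketch of port
configuration against the renamed flags (the port_id and the chosen offload
are illustrative only):

#include <rte_ethdev.h>

/* Minimal sketch: query capabilities and enable one Tx offload using
 * the renamed RTE_ETH_TX_OFFLOAD_* macros. port_id is assumed to be a
 * valid, already probed port.
 */
static int
configure_tx_offloads(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

	/* one Rx and one Tx queue, for brevity */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}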
* [dpdk-dev] [RFC] eventdev: uninline inline API functions
@ 2021-08-30 16:00 2% ` Mattias Rönnblom
2021-08-31 12:28 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2021-08-30 16:00 UTC (permalink / raw)
To: jerinj; +Cc: pbhagavatula, dev, bogdan.tanasa, Mattias Rönnblom
Replace the inline functions in the eventdev user application API with
regular non-inline API calls. This allows for a cleaner and simpler
API/ABI, but might well also cause performance regressions.
The purpose of this RFC patch is to allow for performance testing.
The rte_eventdev struct declaration should be moved off the public
API.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/net/octeontx/octeontx_ethdev.h | 1 +
lib/eventdev/rte_event_eth_rx_adapter.h | 1 +
lib/eventdev/rte_event_eth_tx_adapter.c | 31 ++++++++
lib/eventdev/rte_event_eth_tx_adapter.h | 35 ++-------
lib/eventdev/rte_eventdev.c | 82 +++++++++++++++++++++
lib/eventdev/rte_eventdev.h | 94 +++----------------------
lib/eventdev/version.map | 4 ++
7 files changed, 134 insertions(+), 114 deletions(-)
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37..9402105fcf 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -9,6 +9,7 @@
#include <rte_common.h>
#include <ethdev_driver.h>
+#include <eventdev_pmd.h>
#include <rte_eventdev.h>
#include <rte_mempool.h>
#include <rte_memory.h>
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index 182dd2e5dd..79f4822fb0 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -84,6 +84,7 @@ extern "C" {
#include <rte_service.h>
#include "rte_eventdev.h"
+#include "eventdev_pmd.h"
#define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..74f88e6147 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -1154,6 +1154,37 @@ rte_event_eth_tx_adapter_start(uint8_t id)
return ret;
}
+uint16_t
+rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
+ uint8_t port_id,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ const uint8_t flags)
+{
+ const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ !rte_eventdevs[dev_id].attached) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ if (port_id >= dev->data->nb_ports) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+#endif
+ rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
+ nb_events, flags);
+ if (flags)
+ return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
+ ev, nb_events);
+ else
+ return dev->txa_enqueue(dev->data->ports[port_id], ev,
+ nb_events);
+}
+
int
rte_event_eth_tx_adapter_stats_get(uint8_t id,
struct rte_event_eth_tx_adapter_stats *stats)
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
index 8c59547165..3cd65e8a09 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/eventdev/rte_event_eth_tx_adapter.h
@@ -79,6 +79,7 @@ extern "C" {
#include <rte_mbuf.h>
#include "rte_eventdev.h"
+#include "eventdev_pmd.h"
/**
* Adapter configuration structure
@@ -348,36 +349,12 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
* one or more events. This error code is only applicable to
* closed systems.
*/
-static inline uint16_t
+uint16_t
rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
- uint8_t port_id,
- struct rte_event ev[],
- uint16_t nb_events,
- const uint8_t flags)
-{
- const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
- if (dev_id >= RTE_EVENT_MAX_DEVS ||
- !rte_eventdevs[dev_id].attached) {
- rte_errno = EINVAL;
- return 0;
- }
-
- if (port_id >= dev->data->nb_ports) {
- rte_errno = EINVAL;
- return 0;
- }
-#endif
- rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
- nb_events, flags);
- if (flags)
- return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
- ev, nb_events);
- else
- return dev->txa_enqueue(dev->data->ports[port_id], ev,
- nb_events);
-}
+ uint8_t port_id,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ const uint8_t flags);
/**
* Retrieve statistics for an adapter
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 594dd5e759..e2dad8a838 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -1119,6 +1119,65 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return count;
}
+static __rte_always_inline uint16_t
+__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
+ const struct rte_event ev[], uint16_t nb_events,
+ const event_enqueue_burst_t fn)
+{
+ const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ if (port_id >= dev->data->nb_ports) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+#endif
+ rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
+ /*
+ * Allow zero cost non burst mode routine invocation if application
+ * requests nb_events as const one
+ */
+ if (nb_events == 1)
+ return (*dev->enqueue)(dev->data->ports[port_id], ev);
+ else
+ return fn(dev->data->ports[port_id], ev, nb_events);
+}
+
+uint16_t
+rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
+ const struct rte_event ev[], uint16_t nb_events)
+{
+ const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+ return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
+ dev->enqueue_burst);
+}
+
+uint16_t
+rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
+ const struct rte_event ev[], uint16_t nb_events)
+{
+ const struct rte_eventdev *dev = &rte_event_devices[dev_id];
+
+ return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
+ dev->enqueue_new_burst);
+}
+
+uint16_t
+rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
+ const struct rte_event ev[], uint16_t nb_events)
+{
+ const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+ return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
+ dev->enqueue_forward_burst);
+}
+
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
uint64_t *timeout_ticks)
@@ -1135,6 +1194,29 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
}
+uint16_t
+rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ struct rte_eventdev *dev = &rte_event_devices[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ if (port_id >= dev->data->nb_ports) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+#endif
+ rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
+
+ return (*dev->dequeue_burst)(dev->data->ports[port_id], ev, nb_events,
+ timeout_ticks);
+}
+
int
rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
{
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a9c496fb62..451e9fb0a0 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1445,38 +1445,6 @@ struct rte_eventdev {
void *reserved_ptrs[3]; /**< Reserved for future fields */
} __rte_cache_aligned;
-extern struct rte_eventdev *rte_eventdevs;
-/** @internal The pool of rte_eventdev structures. */
-
-static __rte_always_inline uint16_t
-__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
- const struct rte_event ev[], uint16_t nb_events,
- const event_enqueue_burst_t fn)
-{
- const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
- if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
- rte_errno = EINVAL;
- return 0;
- }
-
- if (port_id >= dev->data->nb_ports) {
- rte_errno = EINVAL;
- return 0;
- }
-#endif
- rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
- /*
- * Allow zero cost non burst mode routine invocation if application
- * requests nb_events as const one
- */
- if (nb_events == 1)
- return (*dev->enqueue)(dev->data->ports[port_id], ev);
- else
- return fn(dev->data->ports[port_id], ev, nb_events);
-}
-
/**
* Enqueue a burst of events objects or an event object supplied in *rte_event*
* structure on an event device designated by its *dev_id* through the event
@@ -1520,15 +1488,9 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
* closed systems.
* @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
*/
-static inline uint16_t
+uint16_t
rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
- const struct rte_event ev[], uint16_t nb_events)
-{
- const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
- return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
- dev->enqueue_burst);
-}
+ const struct rte_event ev[], uint16_t nb_events);
/**
* Enqueue a burst of events objects of operation type *RTE_EVENT_OP_NEW* on
@@ -1571,15 +1533,9 @@ rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
* @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
* @see rte_event_enqueue_burst()
*/
-static inline uint16_t
+uint16_t
rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
- const struct rte_event ev[], uint16_t nb_events)
-{
- const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
- return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
- dev->enqueue_new_burst);
-}
+ const struct rte_event ev[], uint16_t nb_events);
/**
* Enqueue a burst of events objects of operation type *RTE_EVENT_OP_FORWARD*
@@ -1622,15 +1578,10 @@ rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
* @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
* @see rte_event_enqueue_burst()
*/
-static inline uint16_t
+uint16_t
rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
- const struct rte_event ev[], uint16_t nb_events)
-{
- const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
- return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
- dev->enqueue_forward_burst);
-}
+ const struct rte_event ev[],
+ uint16_t nb_events);
/**
* Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
@@ -1727,36 +1678,9 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
*
* @see rte_event_port_dequeue_depth()
*/
-static inline uint16_t
+uint16_t
rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
- uint16_t nb_events, uint64_t timeout_ticks)
-{
- struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
- if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
- rte_errno = EINVAL;
- return 0;
- }
-
- if (port_id >= dev->data->nb_ports) {
- rte_errno = EINVAL;
- return 0;
- }
-#endif
- rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
- /*
- * Allow zero cost non burst mode routine invocation if application
- * requests nb_events as const one
- */
- if (nb_events == 1)
- return (*dev->dequeue)(
- dev->data->ports[port_id], ev, timeout_ticks);
- else
- return (*dev->dequeue_burst)(
- dev->data->ports[port_id], ev, nb_events,
- timeout_ticks);
-}
+ uint16_t nb_events, uint64_t timeout_ticks);
/**
* Link multiple source event queues supplied in *queues* to the destination
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 88625621ec..8da79cbdc0 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -13,7 +13,11 @@ DPDK_22 {
rte_event_crypto_adapter_stats_get;
rte_event_crypto_adapter_stats_reset;
rte_event_crypto_adapter_stop;
+ rte_event_enqueue_burst;
+ rte_event_enqueue_new_burst;
+ rte_event_enqueue_forward_burst;
rte_event_dequeue_timeout_ticks;
+ rte_event_dequeue_burst;
rte_event_dev_attr_get;
rte_event_dev_close;
rte_event_dev_configure;
--
2.17.1
^ permalink raw reply [relevance 2%]
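Two things worth noting about the RFC above. First, the uninlined
rte_event_dequeue_burst() no longer special-cases nb_events == 1 via
dev->dequeue the way the removed inline version did; it always goes through
dev->dequeue_burst, which is part of what the requested performance testing
would measure. Second, nothing changes at application call sites; a minimal
sketch of a worker loop (dev_id and port_id assumed configured elsewhere):

#include <rte_eventdev.h>
#include <rte_pause.h>

/* Sketch: the call sites are identical before and after the RFC; only
 * the implementation moves from inline header code into the library.
 */
static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev;

	for (;;) {
		if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
			continue;

		/* ... process the event ... */

		ev.op = RTE_EVENT_OP_FORWARD;
		while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) == 0)
			rte_pause();
	}
}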
* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
2021-08-30 9:31 3% ` Jerin Jacob
@ 2021-08-30 10:13 0% ` Xueming(Steven) Li
0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-30 10:13 UTC (permalink / raw)
To: Jerin Jacob
Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Monday, August 30, 2021 5:31 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
>
> On Sat, Aug 28, 2021 at 7:46 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Thursday, August 26, 2021 7:58 PM
> > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru>
> > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > >
> > > On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Thursday, August 19, 2021 1:27 PM
> > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > >
> > > > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > queue
> > > > > > >
> > > > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared
> > > > > > > > > Rx queue
> > > > > > > > >
> > > > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > > > >
> > > > > > > > > > In current DPDK framework, each RX queue is pre-loaded
> > > > > > > > > > with mbufs for incoming packets. When number of
> > > > > > > > > > representors scale out in a switch domain, the memory
> > > > > > > > > > consumption became significant. Most important,
> > > > > > > > > > polling all ports leads to high cache miss, high latency and low throughput.
> > > > > > > > > >
> > > > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > > > >
> > > > > > > > > > Port queue number in a shared group should be identical.
> > > > > > > > > > Queue index is
> > > > > > > > > > 1:1 mapped in shared group.
> > > > > > > > > >
> > > > > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > > > > >
> > > > > > > > > > Multiple groups is supported by group ID.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > > ---
> > > > > > > > > > Rx queue object could be used as shared Rx queue
> > > > > > > > > > object, it's important to clear all queue control callback api that using queue object:
> > > > > > > > > >
> > > > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.h
> > > > > > > > > > tml
> > > > > > > > >
> > > > > > > > > > #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > index d2b27c351f..a578c9db9d 100644
> > > > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > > > uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > > > uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > > > uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > > > */
> > > > > > > > > > + uint32_t shared_group; /**< Shared port group
> > > > > > > > > > + index in switch domain. */
> > > > > > > > >
> > > > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > > > How this group is created?
> > > > > > > >
> > > > > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > > > > All ports that supports shared-rxq assigned in same group.
> > > > > > > >
> > > > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > > > to support group other than default.
> > > > > > > >
> > > > > > > > To support more groups simultaneously, need to consider
> > > > > > > > testpmd forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > > > > It's possible to specify how many ports to increase group
> > > > > > > > number, but user must schedule stream affinity carefully - error prone.
> > > > > > > >
> > > > > > > > On the other hand, one group should be sufficient for most
> > > > > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > > > > >
> > > > > > > Ack. One group is enough in testpmd.
> > > > > > >
> > > > > > > My question was more about who and how this group is
> > > > > > > created, Should n't we need API to create shared_group? If
> > > > > > > we do the following, at least, I can think, how it can be
> > > > > > > implemented in SW
> > > or other HW.
> > > > > > >
> > > > > > > - Create aggregation queue group
> > > > > > > - Attach multiple Rx queues to the aggregation queue group
> > > > > > > - Pull the packets from the queue group(which internally
> > > > > > > fetch from the Rx queues _attached_)
> > > > > > >
> > > > > > > Does the above kind of sequence, break your representor use case?
> > > > > >
> > > > > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > > > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> > > > >
> > > > > Which rte_flow pattern/action for this?
> > > >
> > > > No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> > > > I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
> > >
> > > See below.
> > >
> > > >
> > > > >
> > > > > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > > > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > > > > currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > > > > be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > > > > An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > > > > the shared rxq group - this could be an helper API.
> > > > > >
> > > > > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> > > > >
> > > > > Are you doing this feature based on any HW support or it just
> > > > > pure SW thing, If it is SW, It is better to have just new vdev
> > > > > for like drivers/net/bonding/. This we can help aggregate
> > > > > multiple Rxq across
> > > the multiple ports of same the driver.
> > > >
> > > > Based on HW support.
> > >
> > > In Marvel HW, we do some support, I will outline here and some queries on this.
> > >
> > > # We need to create some new HW structure for aggregation # Connect
> > > each Rxq to the new HW structure for aggregation # Use rx_burst from the new HW structure.
> > >
> > > Could you outline your HW support?
> > >
> > > Also, I am not able to understand how this will reduce the memory,
> > > atleast in our HW need creating more memory now to deal this as we need to deal new HW structure.
> > >
> > > How is in your HW it reduces the memory? Also, if memory is the constraint, why NOT reduce the number of queues.
> > >
> >
> > Glad to know that Marvel is working on this, what's the status of driver implementation?
> >
> > In my PMD implementation, it's very similar, a new HW object shared memory pool is created to replace per rxq memory pool.
> > Legacy rxq feed queue with allocated mbufs as number of descriptors,
> > now shared rxqs share the same pool, no need to supply mbufs for each rxq, just feed the shared rxq.
> >
> > So the memory saving reflects to mbuf per rxq, even 1000 representors in shared rxq group, the mbufs consumed is one rxq.
> > In other words, new members in shared rxq doesn’t allocate new mbufs to feed rxq, just share with existing shared rxq(HW
> mempool).
> > The memory required to setup each rxq doesn't change too much, agree.
>
> We can ask the application to configure the same mempool for multiple RQ too. RIght? If the saving is based on sharing the mempool
> with multiple RQs.
Yes, using the same mempool is fundamental. The difference is how many mbufs are allocated from the pool.
Assuming 512 descriptors per rxq and 4 rxqs per device, that's 2.3KB (per mbuf) * 512 * 4 = ~4.6MB per device.
To support 1000 representors, that needs a ~4.6GB mempool :)
For shared rxq, only ~4.6MB (one device's worth) of mbufs are allocated from the mempool; they are shared by all rxqs in the group.
>
> >
> > > # Also, I was thinking, one way to avoid the fast path or ABI change would like.
> > >
> > > # Driver Initializes one more eth_dev_ops in driver as aggregator
> > > ethdev # devargs of new ethdev or specific API like
> > > drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue) tuples which needs to aggregate by new ethdev port #
> No change in fastpath or ABI is required in this model.
> > >
> >
> > This could be an option to access shared rxq. What's the difference of the new PMD?
>
> No ABI and fast change are required.
>
> > What's the difference of PMD driver to create the new device?
> >
> > Is it important in your implementation? Does it work with existing rx_burst api?
>
> Yes . It will work with the existing rx_burst API.
>
> >
> > >
> > >
> > > > Most user might uses PF in group as the anchor port to rx burst, current definition should be easy for them to migrate.
> > > > but some user might prefer grouping some hot
> > > > plug/unpluggedrepresentors, EAL could provide wrappers, users could do that either due to the strategy not complex enough.
> > > Anyway, welcome any suggestion.
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > /**
> > > > > > > > > > * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > > > * Only offloads set on rx_queue_offload_capa
> > > > > > > > > > or rx_offload_capa @@ -1373,6 +1374,12 @@ struct
> > > > > > > > > > rte_eth_conf { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
> > > > > > > > > > #define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
> > > > > > > > > > #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > > > +/**
> > > > > > > > > > + * Rx queue is shared among ports in same switch
> > > > > > > > > > +domain to save memory,
> > > > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > > > + */
> > > > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ 0x00200000
> > > > > > > > > >
> > > > > > > > > > #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > > >
> > > > > > > > > > DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > > > | \
> > > > > > > > > > --
> > > > > > > > > > 2.25.1
> > > > > > > > > >
^ permalink raw reply [relevance 0%]
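For reference, a sketch of how an application would opt in to the shared Rx
queue being discussed. RTE_ETH_RX_OFFLOAD_SHARED_RXQ and the shared_group
field of struct rte_eth_rxconf come from the RFC quoted in this thread, not
from a released API, and may change before merge:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Sketch based on the RFC under discussion: each member port sets up
 * its queue 0 with the shared-rxq offload and the same group, so one
 * poll on any member drains traffic for all of them, with mbuf->port
 * carrying the real source port. Group 0 matches the single default
 * group discussed for testpmd.
 */
static int
setup_shared_rxq(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rx_conf = {0};

	rx_conf.offloads = RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
	rx_conf.shared_group = 0;

	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &rx_conf, mp);
}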
* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
2021-08-28 14:16 0% ` Xueming(Steven) Li
@ 2021-08-30 9:31 3% ` Jerin Jacob
2021-08-30 10:13 0% ` Xueming(Steven) Li
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-30 9:31 UTC (permalink / raw)
To: Xueming(Steven) Li
Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko
On Sat, Aug 28, 2021 at 7:46 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, August 26, 2021 7:58 PM
> > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> >
> > On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Thursday, August 19, 2021 1:27 PM
> > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > > <andrew.rybchenko@oktetlabs.ru>
> > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > >
> > > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > >
> > > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > > queue
> > > > > > > >
> > > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > > >
> > > > > > > > > In current DPDK framework, each RX queue is pre-loaded
> > > > > > > > > with mbufs for incoming packets. When number of
> > > > > > > > > representors scale out in a switch domain, the memory
> > > > > > > > > consumption became significant. Most important, polling
> > > > > > > > > all ports leads to high cache miss, high latency and low throughput.
> > > > > > > > >
> > > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > > >
> > > > > > > > > Port queue number in a shared group should be identical.
> > > > > > > > > Queue index is
> > > > > > > > > 1:1 mapped in shared group.
> > > > > > > > >
> > > > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > > > >
> > > > > > > > > Multiple groups is supported by group ID.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > ---
> > > > > > > > > Rx queue object could be used as shared Rx queue object,
> > > > > > > > > it's important to clear all queue control callback api that using queue object:
> > > > > > > > >
> > > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > > > >
> > > > > > > > > #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > > uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > > uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > > uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > > */
> > > > > > > > > + uint32_t shared_group; /**< Shared port group
> > > > > > > > > + index in switch domain. */
> > > > > > > >
> > > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > > How this group is created?
> > > > > > >
> > > > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > > > All ports that supports shared-rxq assigned in same group.
> > > > > > >
> > > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > > to support group other than default.
> > > > > > >
> > > > > > > To support more groups simultaneously, need to consider
> > > > > > > testpmd forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > > > It's possible to specify how many ports to increase group
> > > > > > > number, but user must schedule stream affinity carefully - error prone.
> > > > > > >
> > > > > > > On the other hand, one group should be sufficient for most
> > > > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > > > >
> > > > > > Ack. One group is enough in testpmd.
> > > > > >
> > > > > > My question was more about who and how this group is created,
> > > > > > Should n't we need API to create shared_group? If we do the following, at least, I can think, how it can be implemented in SW
> > or other HW.
> > > > > >
> > > > > > - Create aggregation queue group
> > > > > > - Attach multiple Rx queues to the aggregation queue group
> > > > > > - Pull the packets from the queue group(which internally fetch
> > > > > > from the Rx queues _attached_)
> > > > > >
> > > > > > Does the above kind of sequence, break your representor use case?
> > > > >
> > > > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> > > >
> > > > Which rte_flow pattern/action for this?
> > >
> > > No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> > > I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
> >
> > See below.
> >
> > >
> > > >
> > > > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > > > currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > > > be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > > > An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > > > the shared rxq group - this could be an helper API.
> > > > >
> > > > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> > > >
> > > > Are you doing this feature based on any HW support or it just pure
> > > > SW thing, If it is SW, It is better to have just new vdev for like drivers/net/bonding/. This we can help aggregate multiple Rxq across
> > the multiple ports of same the driver.
> > >
> > > Based on HW support.
> >
> > In Marvel HW, we do some support, I will outline here and some queries on this.
> >
> > # We need to create some new HW structure for aggregation # Connect each Rxq to the new HW structure for aggregation # Use
> > rx_burst from the new HW structure.
> >
> > Could you outline your HW support?
> >
> > Also, I am not able to understand how this will reduce the memory, atleast in our HW need creating more memory now to deal this as
> > we need to deal new HW structure.
> >
> > How is in your HW it reduces the memory? Also, if memory is the constraint, why NOT reduce the number of queues.
> >
>
> Glad to know that Marvel is working on this, what's the status of driver implementation?
>
> In my PMD implementation, it's very similar, a new HW object shared memory pool is created to replace per rxq memory pool.
> Legacy rxq feed queue with allocated mbufs as number of descriptors, now shared rxqs share the same pool, no need to supply
> mbufs for each rxq, just feed the shared rxq.
>
> So the memory saving reflects to mbuf per rxq, even 1000 representors in shared rxq group, the mbufs consumed is one rxq.
> In other words, new members in shared rxq doesn’t allocate new mbufs to feed rxq, just share with existing shared rxq(HW mempool).
> The memory required to setup each rxq doesn't change too much, agree.
We can ask the application to configure the same mempool for multiple
RQs too. Right? If the saving is based on sharing the mempool
with multiple RQs.
>
> > # Also, I was thinking, one way to avoid the fast path or ABI change would like.
> >
> > # Driver Initializes one more eth_dev_ops in driver as aggregator ethdev # devargs of new ethdev or specific API like
> > drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue) tuples which needs to aggregate by new ethdev port # No
> > change in fastpath or ABI is required in this model.
> >
>
> This could be an option to access shared rxq. What's the difference of the new PMD?
No ABI and fast path changes are required.
> What's the difference of PMD driver to create the new device?
>
> Is it important in your implementation? Does it work with existing rx_burst api?
Yes. It will work with the existing rx_burst API.
>
> >
> >
> > > Most user might uses PF in group as the anchor port to rx burst, current definition should be easy for them to migrate.
> > > but some user might prefer grouping some hot
> > > plug/unpluggedrepresentors, EAL could provide wrappers, users could do that either due to the strategy not complex enough.
> > Anyway, welcome any suggestion.
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > > /**
> > > > > > > > > * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > > * Only offloads set on rx_queue_offload_capa or
> > > > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf
> > > > > > > > > { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
> > > > > > > > > #define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
> > > > > > > > > #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > > +/**
> > > > > > > > > + * Rx queue is shared among ports in same switch domain
> > > > > > > > > +to save memory,
> > > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > > + */
> > > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ 0x00200000
> > > > > > > > >
> > > > > > > > > #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > > DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > > | \
> > > > > > > > > --
> > > > > > > > > 2.25.1
> > > > > > > > >
^ permalink raw reply [relevance 3%]
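To picture the no-fast-path-change model suggested in this thread: the
existing bonding library already aggregates member ports behind one ethdev
that is polled through the normal rx_burst path, and the proposed aggregator
ethdev would follow the same shape. The calls below are the real
rte_eth_bond.h API used purely as an analogy; the shared-rxq aggregator
device itself does not exist yet, and "net_agg0" is an illustrative name:

#include <rte_eth_bond.h>
#include <rte_lcore.h>

/* Analogy only: aggregate member ports behind one ethdev using the
 * existing bonding API. A shared-rxq aggregator as discussed here would
 * expose the same application-facing shape, so no ABI or fast-path
 * change is needed.
 */
static int
create_aggregator(const uint16_t *members, unsigned int n)
{
	int agg = rte_eth_bond_create("net_agg0", BONDING_MODE_ROUND_ROBIN,
				      rte_socket_id());
	unsigned int i;

	if (agg < 0)
		return agg;

	for (i = 0; i < n; i++)
		if (rte_eth_bond_slave_add(agg, members[i]) != 0)
			return -1;

	return agg; /* poll with rte_eth_rx_burst(agg, ...) as usual */
}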
* [dpdk-dev] [PATCH 2/8] cryptodev: move inline APIs into separate structure
@ 2021-08-29 12:51 3% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-08-29 12:51 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, jerinj,
Akhil Goyal
Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/cryptodev/cryptodev_pmd.c | 33 ++++++++++++++++++++++++++++++
lib/cryptodev/cryptodev_pmd.h | 9 ++++++++
lib/cryptodev/rte_cryptodev.c | 3 +++
lib/cryptodev/rte_cryptodev_core.h | 19 +++++++++++++++++
lib/cryptodev/version.map | 4 ++++
5 files changed, 68 insertions(+)
diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 71e34140cd..46772dc355 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -158,3 +158,36 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
return 0;
}
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t qp_id,
+ __rte_unused struct rte_crypto_op **ops,
+ __rte_unused uint16_t nb_ops)
+{
+ CDEV_LOG_ERR(
+ "crypto enqueue burst requested for unconfigured crypto device");
+ return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t qp_id,
+ __rte_unused struct rte_crypto_op **ops,
+ __rte_unused uint16_t nb_ops)
+{
+ CDEV_LOG_ERR(
+ "crypto enqueue burst requested for unconfigured crypto device");
+ return 0;
+}
+
+void
+rte_cryptodev_api_reset(struct rte_cryptodev_api *api)
+{
+ static const struct rte_cryptodev_api dummy = {
+ .enqueue_burst = dummy_crypto_enqueue_burst,
+ .dequeue_burst = dummy_crypto_dequeue_burst,
+ };
+
+ *api = dummy;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index f775ba6beb..eeaea13a23 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -520,6 +520,15 @@ RTE_INIT(init_ ##driver_id)\
driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
}
+/**
+ * Reset crypto device fastpath APIs to dummy values.
+ *
+ * @param The *api* pointer to reset.
+ */
+__rte_internal
+void
+rte_cryptodev_api_reset(struct rte_cryptodev_api *api);
+
static inline void *
get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 9fa3aff1d3..26f8390668 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -54,6 +54,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
.nb_devs = 0
};
+/* Public fastpath APIs. */
+struct rte_cryptodev_api *rte_cryptodev_api;
+
/* spinlock for crypto device callbacks */
static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..ec38f70e0c 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,25 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
struct rte_crypto_op **ops, uint16_t nb_ops);
/**< Enqueue packets for processing on queue pair of a device. */
+typedef uint16_t (*rte_crypto_dequeue_burst_t)(uint8_t dev_id, uint8_t qp_id,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops);
+/**< @internal Dequeue processed packets from queue pair of a device. */
+typedef uint16_t (*rte_crypto_enqueue_burst_t)(uint8_t dev_id, uint8_t qp_id,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops);
+/**< @internal Enqueue packets for processing on queue pair of a device. */
+
+struct rte_cryptodev_api {
+ rte_crypto_enqueue_burst_t enqueue_burst;
+ /**< PMD enqueue burst function. */
+ rte_crypto_dequeue_burst_t dequeue_burst;
+ /**< PMD dequeue burst function. */
+ uintptr_t reserved[6];
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev_api *rte_cryptodev_api;
+
/**
* @internal
* The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 2fdf70002d..050089ae55 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -57,6 +57,9 @@ DPDK_22 {
rte_cryptodev_sym_session_init;
rte_cryptodevs;
+ #added in 21.11
+ rte_cryptodev_api;
+
local: *;
};
@@ -114,6 +117,7 @@ INTERNAL {
global:
rte_cryptodev_allocate_driver;
+ rte_cryptodev_api_reset;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
rte_cryptodev_pmd_create;
--
2.25.1
^ permalink raw reply [relevance 3%]
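For context, later patches in this series presumably point the public
enqueue/dequeue calls at the flat array added above. A sketch of what that
call-through looks like, using only names introduced by the quoted patch
(the actual wrappers are not part of this patch):

#include <rte_cryptodev.h>

/* Sketch of the fast-path call-through enabled by the flat array. Until
 * a PMD installs real handlers, the dummy functions set by
 * rte_cryptodev_api_reset() log an error and return 0.
 */
static inline uint16_t
crypto_enqueue_burst(uint8_t dev_id, uint8_t qp_id,
		     struct rte_crypto_op **ops, uint16_t nb_ops)
{
	const struct rte_cryptodev_api *api = &rte_cryptodev_api[dev_id];

	return api->enqueue_burst(dev_id, qp_id, ops, nb_ops);
}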
* [dpdk-dev] Marvell v21.11 Roadmap
@ 2021-08-29 8:48 4% Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 200+ results
From: Jerin Jacob Kollanukkaran @ 2021-08-29 8:48 UTC (permalink / raw)
To: dev
Cc: Satananda Burla, Prasun Kapoor, GR-ipbu-jerinj-team, Radha Chintakuntla
EAL:
- add oops support
http://patches.dpdk.org/project/dpdk/patch/20210817032723.3997054-2-jerinj@marvell.com/
- make rte_intr_handle hidden for better ABI
https://patches.dpdk.org/project/dpdk/patch/20210826145726.102081-2-hkalra@marvell.com/
ethdev:
- mtr: enhance input color table features
http://patches.dpdk.org/project/dpdk/patch/20210820082401.3778736-1-jerinj@marvell.com/
- introduce IP reassembly in inline protocol offload
https://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
cryptodev:
- ABI improvements to hide internal interface
- Improve crypto/security session data structure to remove unnecessary indirection in data path.
https://patches.dpdk.org/project/dpdk/patch/20210731181327.660296-2-gakhil@marvell.com/
Security:
- Unit test application for IPsec
https://patches.dpdk.org/project/dpdk/list/?series=18253
- Known vector tests & combined mode tests
- AES-GCM
- IV generation
- Negative test ICV corruption
- UDP encapsulation
- Lifetime configuration & extension of rte_crypto_op for soft expiry notification
- IV gen disable for known vector tests with security outbound operations
- IP hdr verify & UDP port verification with security inbound operations
- Inner checksum computation/verification support in security operations
eventdev:
- eventdev: ABI rework to make driver interface internal.
https://patches.dpdk.org/project/dpdk/patch/20210823194020.1229-1-pbhagavatula@marvell.com/
- event vector support for l3fwd application
- event vector support for ipsec-gw application
https://patches.dpdk.org/project/dpdk/patch/20210826100300.4007363-1-schalla@marvell.com/
net/cnxk:
- Inline IPsec support for both poll mode and event mode in CN10K and event mode in CN9K (AES-GCM, AES-CBC-SHA1)
- Egress traffic manager support for CN9K and CN10K SoCs.
- Ingress traffic metering and policing support for CN10K with a 3-level hierarchy.
crypto/cnxk:
- Lookaside protocol support with crypto_cn9k (AES-GCM)
- Crypto adapter support with crypto_cn9k & crypto_cn10k
- Lookaside protocol additional features,
- AES-CBC-SHA1 HMAC
- UDP encapsulation
- Transport
- ZUC 256
mempool/cnxk:
- add telemetry endpoints
event/cnxk:
- add telemetry endpoints
dma/cnxk:
- Add cnxk driver based on new DMA APIs
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
2021-08-26 11:58 4% ` Jerin Jacob
@ 2021-08-28 14:16 0% ` Xueming(Steven) Li
2021-08-30 9:31 3% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-08-28 14:16 UTC (permalink / raw)
To: Jerin Jacob
Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, August 26, 2021 7:58 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
>
> On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Thursday, August 19, 2021 1:27 PM
> > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru>
> > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > >
> > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > >
> > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > queue
> > > > > > >
> > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > >
> > > > > > > > In the current DPDK framework, each RX queue is pre-loaded
> > > > > > > > with mbufs for incoming packets. When the number of
> > > > > > > > representors scales out in a switch domain, the memory
> > > > > > > > consumption becomes significant. Most importantly, polling
> > > > > > > > all ports leads to high cache misses, high latency and low throughput.
> > > > > > > >
> > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > >
> > > > > > > > Port queue numbers in a shared group should be identical.
> > > > > > > > Queue indexes are 1:1 mapped within a shared group.
> > > > > > > >
> > > > > > > > A shared RX queue must be polled on a single thread or core.
> > > > > > > >
> > > > > > > > Multiple groups are supported by group ID.
> > > > > > > >
> > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > ---
> > > > > > > > Rx queue object could be used as shared Rx queue object,
> > > > > > > > it's important to clear all queue control callback api that using queue object:
> > > > > > > >
> > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > > >
> > > > > > > > #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > */
> > > > > > > > + uint32_t shared_group; /**< Shared port group
> > > > > > > > + index in switch domain. */
> > > > > > >
> > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > How this group is created?
> > > > > >
> > > > > > Nice catch, the initial testpmd version only supports one default group (0).
> > > > > > All ports that support shared-rxq are assigned to the same group.
> > > > > >
> > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > to support groups other than the default.
> > > > > >
> > > > > > To support more groups simultaneously, we need to consider
> > > > > > testpmd forwarding stream core assignment; all streams in the same group need to stay on the same core.
> > > > > > It's possible to specify how many ports to use to increase the group
> > > > > > number, but the user must schedule stream affinity carefully - error prone.
> > > > > >
> > > > > > On the other hand, one group should be sufficient for most
> > > > > > customers; the doubt is whether it is valuable to support testing multiple groups.
> > > > >
> > > > > Ack. One group is enough in testpmd.
> > > > >
> > > > > My question was more about who creates this group and how.
> > > > > Shouldn't we need an API to create the shared_group? If we do the following, at least I can see how it could be implemented in SW
> or other HW.
> > > > >
> > > > > - Create aggregation queue group
> > > > > - Attach multiple Rx queues to the aggregation queue group
> > > > > - Pull the packets from the queue group(which internally fetch
> > > > > from the Rx queues _attached_)
> > > > >
> > > > > Does the above kind of sequence break your representor use case?
> > > >
> > > > Seems more like a set of EAL wrappers. The current API tries to minimize the application effort to adapt to shared-rxq.
> > > > - step 1, not sure how important it is to create the group with an API; in rte_flow, a group is created on demand.
> > >
> > > Which rte_flow pattern/action for this?
> >
> > No rte_flow for this; I just recalled that a group in rte_flow is not created along with the flow, nor via an API.
> > I don't see anything else to create along with the group; I just doubt whether it is valuable to introduce a new API set to manage groups.
>
> See below.
>
> >
> > >
> > > > - step 2, currently the attaching is done in rte_eth_rx_queue_setup, specifying the offload and group in the rx_conf struct.
> > > > - step 3, define a dedicated API to receive packets from a shared rxq? Looks clearer to receive packets from the shared rxq.
> > > > currently, the rxq objects in a share group are the same - the shared rxq - so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > > be used to receive packets from any port in the group, normally the first port (PF) in the group.
> > > > An alternative way is defining a vdev with the same queue number and copying the rxq objects; that would make the vdev a proxy of
> > > > the shared rxq group - this could be a helper API.
> > > >
> > > > Anyway, the wrapper doesn't break the use case; the step 3 API is clearer, we need to understand how to implement it efficiently.
> > >
> > > Are you doing this feature based on any HW support or is it a pure
> > > SW thing? If it is SW, it is better to have just a new vdev like drivers/net/bonding/. With this we can aggregate multiple Rxqs across
> > > the multiple ports of the same driver.
> >
> > Based on HW support.
>
> In Marvel HW, we do some support, I will outline here and some queries on this.
>
> # We need to create some new HW structure for aggregation
> # Connect each Rxq to the new HW structure for aggregation
> # Use rx_burst from the new HW structure.
>
> Could you outline your HW support?
>
> Also, I am not able to understand how this will reduce the memory; at least in our HW we need to create more memory now to deal
> with this, as we need to handle a new HW structure.
>
> How does your HW reduce the memory? Also, if memory is the constraint, why NOT reduce the number of queues?
>
Glad to know that Marvell is working on this. What's the status of the driver implementation?
In my PMD implementation it's very similar: a new HW object, a shared memory pool, is created to replace the per-rxq memory pool.
A legacy rxq feeds the queue with as many allocated mbufs as there are descriptors; shared rxqs instead share the same pool, so
there is no need to supply mbufs for each rxq, just feed the shared rxq.
So the memory saving shows up in the mbufs per rxq: even with 1000 representors in a shared rxq group, the mbufs consumed equal
those of a single rxq.
In other words, a new member of a shared rxq group doesn't allocate new mbufs to feed its rxq; it shares the existing shared rxq (HW mempool).
The memory required to set up each rxq itself doesn't change much, agreed.
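To make the proposed usage concrete, here is a minimal sketch under this RFC; shared_group, RTE_ETH_RX_OFFLOAD_SHARED_RXQ and the mbuf->port convention come from the patch, while the variable names and burst size below are purely illustrative:

	struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
	rxconf.shared_group = 0; /* all members join the default group */

	/* every member port sets up the same queue index, same config */
	RTE_ETH_FOREACH_DEV(port_id)
		rte_eth_rx_queue_setup(port_id, 0, nb_desc, socket_id,
				       &rxconf, shared_mempool);

	/* polling any member drains the whole group; the true source
	 * port of each packet is recovered from mbuf->port */
	nb_rx = rte_eth_rx_burst(pf_port_id, 0, pkts, BURST_SIZE);
	for (i = 0; i < nb_rx; i++)
		process_pkt(pkts[i]->port, pkts[i]);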
> # Also, I was thinking, one way to avoid the fast path or ABI change would be like this.
>
> # Driver initializes one more eth_dev_ops in the driver as an aggregator ethdev
> # devargs of the new ethdev, or a specific API like drivers/net/bonding/rte_eth_bond.h, can take the (port, queue) tuples which
> need to be aggregated by the new ethdev port
> # No change in fastpath or ABI is required in this model.
>
This could be an option for accessing a shared rxq. How would the new PMD differ?
How would a PMD create the new device?
Is that important in your implementation? Does it work with the existing rx_burst API?
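For comparison, the aggregator-ethdev alternative would look to an application roughly like bonding; a sketch in which the vdev name and devargs keys are purely hypothetical:

	/* create a proxy port aggregating queue 0 of member ports 0 and 1;
	 * its rx_burst would internally fetch from the attached queues */
	rte_vdev_init("net_rxq_agg0", "member=(0,0),member=(1,0)");

No fastpath or ABI change would be needed in that model, at the cost of one more port to manage.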
>
>
> > Most users might use the PF in a group as the anchor port for rx burst; the current definition should be easy for them to migrate to.
> > But some users might prefer grouping some hot plugged/unplugged
> > representors; EAL could provide wrappers, or users could do that themselves if the strategy is not complex enough.
> Anyway, welcome any suggestion.
> >
> > >
> > >
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > /**
> > > > > > > > * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > * Only offloads set on rx_queue_offload_capa or
> > > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf
> > > > > > > > { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
> > > > > > > > #define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
> > > > > > > > #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > +/**
> > > > > > > > + * Rx queue is shared among ports in same switch domain
> > > > > > > > +to save memory,
> > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > + */
> > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ 0x00200000
> > > > > > > >
> > > > > > > > #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > | \
> > > > > > > > --
> > > > > > > > 2.25.1
> > > > > > > >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ethdev: fix representor port ID search by name
@ 2021-08-27 9:18 0% ` Xueming(Steven) Li
0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-27 9:18 UTC (permalink / raw)
To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
NBU-Contact-Thomas Monjalon, Ferruh Yigit
Cc: dev, Viacheslav Galaktionov
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, August 18, 2021 10:00 PM
> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; John Daley
> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>; Qiming Yang
> <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; Xueming(Steven) Li <xuemingl@nvidia.com>
> Subject: [PATCH v2] ethdev: fix representor port ID search by name
>
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
>
> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
>
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into a hole in the rte_eth_dev_data structure.
> The patch does not change the ABI, but extra care is required since the ABI check is disabled for the structure because of the libabigail bug [1].
>
> Potentially it is bad for out-of-tree drivers which implement representors but do not fill in the new parent_port_id field in the
> rte_eth_dev_data structure. Do we care?
>
> Maybe the patch should add lines to the release notes, but I'd like to get initial feedback first.
>
> mlx5 changes should be reviewed by maintainers very carefully, since we are not sure we have patched them correctly.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>
> drivers/net/bnxt/bnxt_reps.c | 1 +
> drivers/net/enic/enic_vf_representor.c | 1 +
> drivers/net/i40e/i40e_vf_representor.c | 1 +
> drivers/net/ice/ice_dcf_vf_representor.c | 1 + drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 17 +++++++++++++++++
> drivers/net/mlx5/windows/mlx5_os.c | 17 +++++++++++++++++
> lib/ethdev/ethdev_driver.h | 6 +++---
> lib/ethdev/rte_class_eth.c | 22 ++++++++++++++++++++--
> lib/ethdev/rte_ethdev.c | 8 ++++----
> lib/ethdev/rte_ethdev_core.h | 4 ++++
> 11 files changed, 70 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index bdbad53b7d..902591cd39 100644
> --- a/drivers/net/bnxt/bnxt_reps.c
> +++ b/drivers/net/bnxt/bnxt_reps.c
> @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
> eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> eth_dev->data->representor_id = rep_params->vf_id;
> + eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
>
> rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
> memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr, diff --git a/drivers/net/enic/enic_vf_representor.c
> b/drivers/net/enic/enic_vf_representor.c
> index 79dd6e5640..6ee7967ce9 100644
> --- a/drivers/net/enic/enic_vf_representor.c
> +++ b/drivers/net/enic/enic_vf_representor.c
> @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
> eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> eth_dev->data->representor_id = vf->vf_id;
> + eth_dev->data->parent_port_id = pf->port_id;
> eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
> sizeof(struct rte_ether_addr) *
> ENIC_UNICAST_PERFECT_FILTERS, 0);
> diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> index 0481b55381..865b637585 100644
> --- a/drivers/net/i40e/i40e_vf_representor.c
> +++ b/drivers/net/i40e/i40e_vf_representor.c
> @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> ethdev->data->representor_id = representor->vf_id;
> + ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
>
> /* Setting the number queues allocated to the VF */
> ethdev->data->nb_rx_queues = vf->vsi->nb_qps; diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e9..c7cd3fd290 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
>
> vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> vf_rep_eth_dev->data->representor_id = repr->vf_id;
> + vf_rep_eth_dev->data->parent_port_id =
> +repr->dcf_eth_dev->data->port_id;
>
> vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
>
> diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> index d5b636a194..7a2063849e 100644
> --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>
> ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> ethdev->data->representor_id = representor->vf_id;
> + ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
>
> /* Set representor device ops */
> ethdev->dev_ops = &ixgbe_vf_representor_dev_ops; diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index 5f8766aa48..a68fa7beb7 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1677,6 +1677,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> if (priv->representor) {
> eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> eth_dev->data->representor_id = priv->representor_id;
> + MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> + struct mlx5_priv *opriv =
> + rte_eth_devices[port_id].data->dev_private;
> + if (opriv &&
> + opriv->master &&
> + opriv->domain_id == priv->domain_id &&
> + opriv->sh == priv->sh) {
> + eth_dev->data->parent_port_id =
> + rte_eth_devices[port_id].data->port_id;
> + break;
> + }
> + }
> + if (port_id >= RTE_MAX_ETHPORTS) {
> + DRV_LOG(ERR, "no master device for representor");
> + err = ENODEV;
> + goto error;
> + }
> }
> priv->mp_id.port_id = eth_dev->data->port_id;
> strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN); diff --git a/drivers/net/mlx5/windows/mlx5_os.c
> b/drivers/net/mlx5/windows/mlx5_os.c
> index 7e1df1c751..0c5a02bfcb 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -543,6 +543,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> if (priv->representor) {
> eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> eth_dev->data->representor_id = priv->representor_id;
> + MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> + struct mlx5_priv *opriv =
> + rte_eth_devices[port_id].data->dev_private;
> + if (opriv &&
> + opriv->master &&
> + opriv->domain_id == priv->domain_id &&
> + opriv->sh == priv->sh) {
> + eth_dev->data->parent_port_id =
> + rte_eth_devices[port_id].data->port_id;
Could this value be different from port_id?
> + break;
> + }
> + }
> + if (port_id >= RTE_MAX_ETHPORTS) {
> + DRV_LOG(ERR, "no master device for representor");
> + err = ENODEV;
> + goto error;
This shouldn't be an error.
The parent port ID defaults to 0, which could be wrong if multiple PFs are probed; let's default to the current port ID instead.
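Something like this sketch of the suggested fallback (illustrative only):

	if (port_id >= RTE_MAX_ETHPORTS) {
		/* no master found: fall back to the representor's own
		 * port ID rather than failing the probe */
		eth_dev->data->parent_port_id = eth_dev->data->port_id;
	}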
> + }
> }
> /*
> * Store associated network device interface index. This index diff --git a/lib/ethdev/ethdev_driver.h
> b/lib/ethdev/ethdev_driver.h index fd5b7ca550..d1a1499538 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1287,8 +1287,8 @@ struct rte_eth_devargs {
> * For backward compatibility, if no representor info, direct
> * map legacy VF (no controller and pf).
> *
> - * @param ethdev
> - * Handle of ethdev port.
> + * @param port_id
> + * Port ID of the backing device.
> * @param type
> * Representor type.
> * @param controller
> @@ -1305,7 +1305,7 @@ struct rte_eth_devargs {
> */
> __rte_internal
> int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
> enum rte_eth_representor_type type,
> int controller, int pf, int representor_port,
> uint16_t *repr_id);
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 1fe5fa1f36..167d2d798c 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -95,14 +95,32 @@ eth_representor_cmp(const char *key __rte_unused,
> c = i / (np * nf);
> p = (i / nf) % np;
> f = i % nf;
> - if (rte_eth_representor_id_get(edev,
> + /*
> + * rte_eth_representor_id_get expects to receive port ID of
> + * the master device, but in order to maintain compatibility
> + * with mlx5's hardware bonding and legacy representor
> + * specification using just VF numbers, the representor's port
> + * ID is tried first.
> + */
> + ret = rte_eth_representor_id_get(edev->data->port_id,
> eth_da.type,
> eth_da.nb_mh_controllers == 0 ? -1 :
> eth_da.mh_controllers[c],
> eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
> eth_da.nb_representor_ports == 0 ? -1 :
> eth_da.representor_ports[f],
> - &id) < 0)
> + &id);
> + if (ret == -ENOTSUP)
> + ret = rte_eth_representor_id_get(
> + edev->data->parent_port_id,
> + eth_da.type,
> + eth_da.nb_mh_controllers == 0 ? -1 :
> + eth_da.mh_controllers[c],
> + eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
> + eth_da.nb_representor_ports == 0 ? -1 :
> + eth_da.representor_ports[f],
> + &id);
> + if (ret < 0)
> continue;
> if (data->representor_id == id)
> return 0;
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 9d95cd11e1..228ef7bf23 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) }
>
> int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
> enum rte_eth_representor_type type,
> int controller, int pf, int representor_port,
> uint16_t *repr_id)
> @@ -6013,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> return -EINVAL;
>
> /* Get PMD representor range info. */
> - ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> + ret = rte_eth_representor_info_get(port_id, NULL);
> if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
> controller == -1 && pf == -1) {
> /* Direct mapping for legacy VF representor. */ @@ -6028,7 +6028,7 @@ rte_eth_representor_id_get(const struct
> rte_eth_dev *ethdev,
> if (info == NULL)
> return -ENOMEM;
> info->nb_ranges_alloc = n;
> - ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> + ret = rte_eth_representor_info_get(port_id, info);
> if (ret < 0)
> goto out;
>
> @@ -6047,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> continue;
> if (info->ranges[i].id_end < info->ranges[i].id_base) {
> RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> - ethdev->data->port_id, info->ranges[i].id_base,
> + port_id, info->ranges[i].id_base,
> info->ranges[i].id_end, i);
> continue;
>
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h index edf96de2dc..13cb84b52f 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,10 @@ struct rte_eth_dev_data {
> /**< Switch-specific identifier.
> * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> */
> + uint16_t parent_port_id;
> + /**< Port ID of the backing device.
> + * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> + */
>
> pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
> uint64_t reserved_64s[4]; /**< Reserved for future fields */
> --
> 2.30.2
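For out-of-tree PMDs, the concern above boils down to one extra assignment in representor init, mirroring the in-tree drivers patched here; a sketch in which vf_id and parent_dev are placeholders:

	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
	ethdev->data->representor_id = vf_id;
	/* new field: must point at the backing device, not the representor
	 * itself, otherwise representor ID lookup by name takes the
	 * fallback path and may fail */
	ethdev->data->parent_port_id = parent_dev->data->port_id;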
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers
@ 2021-08-27 6:56 2% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev
Pick up new FW interface definitions.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++++++++-
1 file changed, 1176 insertions(+), 35 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx_regs_mcdi.h b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
index a3c9f076ec..2daf825a36 100644
--- a/drivers/common/sfc_efx/base/efx_regs_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
@@ -492,6 +492,24 @@
*/
#define MAE_FIELD_SUPPORTED_MATCH_MASK 0x5
+/* MAE_CT_VNI_MODE enum: Controls the layout of the VNI input to the conntrack
+ * lookup. (Values are not arbitrary - constrained by table access ABI.)
+ */
+/* enum: The VNI input to the conntrack lookup will be zero. */
+#define MAE_CT_VNI_MODE_ZERO 0x0
+/* enum: The VNI input to the conntrack lookup will be the VNI (VXLAN/Geneve)
+ * or VSID (NVGRE) field from the packet.
+ */
+#define MAE_CT_VNI_MODE_VNI 0x1
+/* enum: The VNI input to the conntrack lookup will be the VLAN ID from the
+ * outermost VLAN tag (in bottom 12 bits; top 12 bits zero).
+ */
+#define MAE_CT_VNI_MODE_1VLAN 0x2
+/* enum: The VNI input to the conntrack lookup will be the VLAN IDs from both
+ * VLAN tags (outermost in bottom 12 bits, innermost in top 12 bits).
+ */
+#define MAE_CT_VNI_MODE_2VLAN 0x3
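/* Illustrative note (not part of this patch): in 2VLAN mode the 24-bit
 * VNI input would be composed from the two VLAN IDs as described above,
 * e.g. vni = (outer_vid & 0xfff) | ((uint32_t)(inner_vid & 0xfff) << 12);
 */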
+
/* MAE_FIELD enum: NB: this enum shares namespace with the support status enum.
*/
/* enum: Source mport upon entering the MAE. */
@@ -617,7 +635,8 @@
/* MAE_MCDI_ENCAP_TYPE enum: Encapsulation type. Defines how the payload will
* be parsed to an inner frame. Other values are reserved. Unknown values
- * should be treated same as NONE.
+ * should be treated same as NONE. (Values are not arbitrary - constrained by
+ * table access ABI.)
*/
#define MAE_MCDI_ENCAP_TYPE_NONE 0x0 /* enum */
/* enum: Don't assume enum aligns with support bitmask... */
@@ -634,6 +653,18 @@
/* enum: Selects the virtual NIC plugged into the MAE switch */
#define MAE_MPORT_END_VNIC 0x2
+/* MAE_COUNTER_TYPE enum: The datapath maintains several sets of counters, each
+ * being associated with a different table. Note that the same counter ID may
+ * be allocated by different counter blocks, so e.g. AR counter 42 is different
+ * from CT counter 42. Generation counts are also type-specific. This value is
+ * also present in the header of streaming counter packets, in the IDENTIFIER
+ * field (see packetiser packet format definitions).
+ */
+/* enum: Action Rule counters - can be referenced in AR response. */
+#define MAE_COUNTER_TYPE_AR 0x0
+/* enum: Conntrack counters - can be referenced in CT response. */
+#define MAE_COUNTER_TYPE_CT 0x1
+
/* MCDI_EVENT structuredef: The structure of an MCDI_EVENT on Siena/EF10/EF100
* platforms
*/
@@ -4547,6 +4578,8 @@
#define MC_CMD_MEDIA_BASE_T 0x6
/* enum: QSFP+. */
#define MC_CMD_MEDIA_QSFP_PLUS 0x7
+/* enum: DSFP. */
+#define MC_CMD_MEDIA_DSFP 0x8
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_OFST 48
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_LEN 4
/* enum: Native clause 22 */
@@ -7823,11 +7856,16 @@
/***********************************/
/* MC_CMD_GET_PHY_MEDIA_INFO
* Read media-specific data from PHY (e.g. SFP/SFP+ module ID information for
- * SFP+ PHYs). The 'media type' can be found via GET_PHY_CFG
- * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid 'page number' input values, and the
- * output data, are interpreted on a per-type basis. For SFP+: PAGE=0 or 1
+ * SFP+ PHYs). The "media type" can be found via GET_PHY_CFG
+ * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid "page number" input values, and the
+ * output data, are interpreted on a per-type basis. For SFP+, PAGE=0 or 1
* returns a 128-byte block read from module I2C address 0xA0 offset 0 or 0x80.
- * Anything else: currently undefined. Locks required: None. Return code: 0.
+ * For QSFP, PAGE=-1 is the lower (unbanked) page. PAGE=2 is the EEPROM and
+ * PAGE=3 is the module limits. For DSFP, module addressing requires a
+ * "BANK:PAGE". Not every bank has the same number of pages. See the Common
+ * Management Interface Specification (CMIS) for further details. A BANK:PAGE
+ * of "0xffff:0xffff" retrieves the lower (unbanked) page. Locks required -
+ * None. Return code - 0.
*/
#define MC_CMD_GET_PHY_MEDIA_INFO 0x4b
#define MC_CMD_GET_PHY_MEDIA_INFO_MSGSET 0x4b
@@ -7839,6 +7877,12 @@
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_LEN 4
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_OFST 0
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_LEN 4
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_LBN 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_WIDTH 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_WIDTH 16
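/* Illustrative helper (not part of this patch): the DSFP fields above
 * sub-divide the 32-bit PAGE word, so a BANK:PAGE request would be
 * composed as below; 0xffff:0xffff selects the lower (unbanked) page.
 */
static inline uint32_t
dsfp_bank_page(uint16_t bank, uint16_t page)
{
	return ((uint32_t)bank << MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN) |
	       page;
}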
/* MC_CMD_GET_PHY_MEDIA_INFO_OUT msgresponse */
#define MC_CMD_GET_PHY_MEDIA_INFO_OUT_LENMIN 5
@@ -9350,6 +9394,8 @@
#define NVRAM_PARTITION_TYPE_FPGA_JUMP 0xb08
/* enum: FPGA Validate XCLBIN */
#define NVRAM_PARTITION_TYPE_FPGA_XCLBIN_VALIDATE 0xb09
+/* enum: FPGA XOCL Configuration information */
+#define NVRAM_PARTITION_TYPE_FPGA_XOCL_CONFIG 0xb0a
/* enum: MUM firmware partition */
#define NVRAM_PARTITION_TYPE_MUM_FIRMWARE 0xc00
/* enum: SUC firmware partition (this is intentionally an alias of
@@ -9427,6 +9473,8 @@
#define NVRAM_PARTITION_TYPE_BUNDLE_LOG 0x1e02
/* enum: Partition for Solarflare gPXE bootrom installed via Bundle update. */
#define NVRAM_PARTITION_TYPE_EXPANSION_ROM_INTERNAL 0x1e03
+/* enum: Partition to store ASN.1 format Bundle Signature for checking. */
+#define NVRAM_PARTITION_TYPE_BUNDLE_SIGNATURE 0x1e04
/* enum: Test partition on SmartNIC system microcontroller (SUC) */
#define NVRAM_PARTITION_TYPE_SUC_TEST 0x1f00
/* enum: System microcontroller access to primary FPGA flash. */
@@ -10051,6 +10099,158 @@
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+/* MC_CMD_INIT_EVQ_V3_IN msgrequest: Extended request to specify per-queue
+ * event merge timeouts.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_LEN 556
+/* Size, in entries */
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_OFST 0
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_LEN 4
+/* Desired instance. Must be set to a specific instance, which is a function
+ * local queue index. The calling client must be the currently-assigned user of
+ * this VI (see MC_CMD_SET_VI_USER).
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_LEN 4
+/* The initial timer value. The load value is ignored if the timer mode is DIS.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_OFST 8
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_LEN 4
+/* The reload value is ignored in one-shot modes */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_OFST 12
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_LEN 4
+/* tbd */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_LBN 0
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_LBN 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_LBN 2
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_LBN 3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_LBN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_LBN 5
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_LBN 6
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LBN 7
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_WIDTH 4
+/* enum: All initialisation flags specified by host. */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_MANUAL 0x0
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the lowest latency achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LOW_LATENCY 0x1
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the best throughput achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_THROUGHPUT 0x2
+/* enum: MEDFORD only. Certain initialisation flags may be over-ridden by
+ * firmware based on licenses and firmware variant. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_AUTO 0x3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_LBN 11
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_OFST 20
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_DIS 0x0
+/* enum: Immediate */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_IMMED_START 0x1
+/* enum: Triggered */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_TRIG_START 0x2
+/* enum: Hold-off */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_INT_HLDOFF 0x3
+/* Target EVQ for wakeups if in wakeup mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_LEN 4
+/* Target interrupt if in interrupting mode (note union with target EVQ). Use
+ * MC_CMD_RESOURCE_INSTANCE_ANY unless a specific one required for test
+ * purposes.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_LEN 4
+/* Event Counter Mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_OFST 28
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_DIS 0x0
+/* enum: Count Rx events */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RX 0x1
+/* enum: Count Tx events */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_TX 0x2
+/* enum: Count Rx and Tx events */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RXTX 0x3
+/* Event queue packet count threshold. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_OFST 32
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_LEN 4
+/* 64-bit address of 4k of 4k-aligned host memory buffer */
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LEN 8
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LBN 288
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_OFST 40
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LBN 320
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
+/* Receive event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_RX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST 548
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_LEN 4
+/* Transmit event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_TX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST 552
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_LEN 4
+
+/* MC_CMD_INIT_EVQ_V3_OUT msgresponse */
+#define MC_CMD_INIT_EVQ_V3_OUT_LEN 8
+/* Only valid if INTRFLAG was true */
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_OFST 0
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_LEN 4
+/* Actual configuration applied on the card */
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_LBN 0
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_LBN 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_LBN 2
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+
/* QUEUE_CRC_MODE structuredef */
#define QUEUE_CRC_MODE_LEN 1
#define QUEUE_CRC_MODE_MODE_LBN 0
@@ -10256,7 +10456,9 @@
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10360,7 +10562,9 @@
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10493,7 +10697,9 @@
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10639,7 +10845,9 @@
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10878,7 +11086,7 @@
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 0
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM 64
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Flags related to Qbb flow control mode. */
@@ -12228,6 +12436,8 @@
* rules inserted by MC_CMD_VNIC_ENCAP_RULE_ADD. (ef100 and later)
*/
#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_MATCHES 0x5
+/* enum: read the supported encapsulation types for the VNIC */
+#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_TYPES 0x6
/* MC_CMD_GET_PARSER_DISP_INFO_OUT msgresponse */
#define MC_CMD_GET_PARSER_DISP_INFO_OUT_LENMIN 8
@@ -12336,6 +12546,30 @@
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM 61
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM_MCDI2 253
+/* MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT msgresponse: Returns
+ * the supported encapsulation types for the VNIC
+ */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_LEN 8
+/* The op code OP_GET_SUPPORTED_VNIC_ENCAP_TYPES is returned */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_OFST 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_GET_PARSER_DISP_INFO_IN/OP */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+
/***********************************/
/* MC_CMD_PARSER_DISP_RW
@@ -16236,6 +16470,9 @@
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* MC_CMD_GET_CAPABILITIES_V8_OUT msgresponse */
#define MC_CMD_GET_CAPABILITIES_V8_OUT_LEN 160
@@ -16734,6 +16971,9 @@
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17246,6 +17486,9 @@
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17793,6 +18036,9 @@
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -19900,6 +20146,18 @@
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_OFST 4
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_LEN 4
+/* MC_CMD_GET_FUNCTION_INFO_OUT_V2 msgresponse */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN 12
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST 0
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_LEN 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_LEN 4
+/* Values from PCIE_INTERFACE enumeration. For NICs with a single interface, or
+ * in the case of a V1 response, this should be HOST_PRIMARY.
+ */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST 8
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_LEN 4
+
/***********************************/
/* MC_CMD_ENABLE_OFFLINE_BIST
@@ -25682,6 +25940,9 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_LBN 6
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_LBN 7
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_WIDTH 1
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_LBN 7
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_WIDTH 1
@@ -25691,6 +25952,12 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_LBN 9
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_LBN 10
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_LBN 11
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_WIDTH 1
/* MC_CMD_GET_RX_PREFIX_ID_OUT msgresponse */
#define MC_CMD_GET_RX_PREFIX_ID_OUT_LENMIN 8
@@ -25736,9 +26003,12 @@
#define RX_PREFIX_FIELD_INFO_PARTIAL_TSTAMP 0x4 /* enum */
#define RX_PREFIX_FIELD_INFO_RSS_HASH 0x5 /* enum */
#define RX_PREFIX_FIELD_INFO_USER_MARK 0x6 /* enum */
+#define RX_PREFIX_FIELD_INFO_INGRESS_MPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_INGRESS_VPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_CSUM_FRAME 0x8 /* enum */
#define RX_PREFIX_FIELD_INFO_VLAN_STRIP_TCI 0x9 /* enum */
+#define RX_PREFIX_FIELD_INFO_VLAN_STRIPPED 0xa /* enum */
+#define RX_PREFIX_FIELD_INFO_VSWITCH_STATUS 0xb /* enum */
#define RX_PREFIX_FIELD_INFO_TYPE_LBN 24
#define RX_PREFIX_FIELD_INFO_TYPE_WIDTH 8
@@ -26063,6 +26333,10 @@
#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_LINK 0x5
/* enum: Read internal link configuration. */
#define MC_CMD_FPGA_IN_OP_GET_INTERNAL_LINK 0x6
+/* enum: Get MAC statistics of FPGA external port. */
+#define MC_CMD_FPGA_IN_OP_GET_MAC_STATS 0x7
+/* enum: Set configuration on internal FPGA MAC. */
+#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC 0x8
/* MC_CMD_FPGA_OP_GET_VERSION_IN msgrequest: Get the FPGA version string. A
* free-format string is returned in response to this command. Any checks on
@@ -26206,6 +26480,87 @@
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_OFST 4
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_LEN 4
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_IN msgrequest: Get FPGA external port MAC
+ * statistics.
+ */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_LEN 4
+/* Sub-command code. Must be OP_GET_MAC_STATS. */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_LEN 4
+
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_OUT msgresponse */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX 252
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LEN(num) (4+8*(num))
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(len) (((len)-4)/8)
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LBN 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_OFST 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LBN 64
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MINNUM 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM 31
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM_MCDI2 127
+#define MC_CMD_FPGA_MAC_TX_TOTAL_PACKETS 0x0 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_BYTES 0x1 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_PACKETS 0x2 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_BYTES 0x3 /* enum */
+#define MC_CMD_FPGA_MAC_TX_BAD_FCS 0x4 /* enum */
+#define MC_CMD_FPGA_MAC_TX_PAUSE 0x5 /* enum */
+#define MC_CMD_FPGA_MAC_TX_USER_PAUSE 0x6 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_PACKETS 0x7 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_BYTES 0x8 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_PACKETS 0x9 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_BYTES 0xa /* enum */
+#define MC_CMD_FPGA_MAC_RX_BAD_FCS 0xb /* enum */
+#define MC_CMD_FPGA_MAC_RX_PAUSE 0xc /* enum */
+#define MC_CMD_FPGA_MAC_RX_USER_PAUSE 0xd /* enum */
+#define MC_CMD_FPGA_MAC_RX_UNDERSIZE 0xe /* enum */
+#define MC_CMD_FPGA_MAC_RX_OVERSIZE 0xf /* enum */
+#define MC_CMD_FPGA_MAC_RX_FRAMING_ERR 0x10 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_UNCORRECTED_ERRORS 0x11 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_CORRECTED_ERRORS 0x12 /* enum */
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN msgrequest: Configures the internal port
+ * MAC on the FPGA.
+ */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN 20
+/* Sub-command code. Must be OP_SET_INTERNAL_MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_LEN 4
+/* Select which parameters to configure. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_LEN 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_LBN 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_LBN 2
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_WIDTH 1
+/* The MTU to be programmed into the MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST 8
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_LEN 4
+/* Drain Tx FIFO */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_OFST 12
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_LEN 4
+/* flow control configuration. See MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_OFST 16
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_LEN 4
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT msgresponse */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT_LEN 0
+
/***********************************/
/* MC_CMD_EXTERNAL_MAE_GET_LINK_MODE
@@ -26483,6 +26838,12 @@
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_OFST 29
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_LBN 0
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_LBN 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_LBN 2
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_WIDTH 1
/* Only if MATCH_DST_PORT is set. Port number as bytes in network order. */
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_OFST 30
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_LEN 2
@@ -26544,6 +26905,257 @@
#define UUID_NODE_LBN 80
#define UUID_NODE_WIDTH 48
+
+/***********************************/
+/* MC_CMD_PLUGIN_ALLOC
+ * Create a handle to a datapath plugin's extension. This involves finding a
+ * currently-loaded plugin offering the given functionality (as identified by
+ * the UUID) and allocating a handle to track the usage of it. Plugin
+ * functionality is identified by 'extension' rather than any other identifier
+ * so that a single plugin bitfile may offer more than one piece of independent
+ * functionality. If two bitfiles are loaded which both offer the same
+ * extension, then the metadata is interrogated further to determine which is
+ * the newest and that is the one opened. See SF-123625-SW for architectural
+ * detail on datapath plugins.
+ */
+#define MC_CMD_PLUGIN_ALLOC 0x1ad
+#define MC_CMD_PLUGIN_ALLOC_MSGSET 0x1ad
+#undef MC_CMD_0x1ad_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ad_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_ALLOC_IN msgrequest */
+#define MC_CMD_PLUGIN_ALLOC_IN_LEN 24
+/* The functionality requested of the plugin, as a UUID structure */
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN 16
+/* Additional options for opening the handle */
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_LBN 0
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_WIDTH 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_WIDTH 1
+/* Load the extension only if it is in the specified administrative group.
+ * Specify ANY to load the extension wherever it is found (if there are
+ * multiple choices then the extension with the highest MINOR_VER/PATCH_VER
+ * will be loaded). See MC_CMD_PLUGIN_GET_META_GLOBAL for a description of
+ * administrative groups.
+ */
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST 20
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN 2
+/* enum: Load the extension from any ADMIN_GROUP. */
+#define MC_CMD_PLUGIN_ALLOC_IN_ANY 0xffff
+/* Reserved */
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_OFST 22
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_LEN 2
+
+/* MC_CMD_PLUGIN_ALLOC_OUT msgresponse */
+#define MC_CMD_PLUGIN_ALLOC_OUT_LEN 4
+/* Unique identifier of this usage */
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_LEN 4
+
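To make the request layout concrete, here is a hypothetical sketch (not
part of this patch) of filling in the 24-byte MC_CMD_PLUGIN_ALLOC_IN
body, assuming little-endian MCDI fields:

#include <stdint.h>
#include <string.h>

static void build_plugin_alloc_req(uint8_t req[24],
				   const uint8_t uuid[16], int info_only)
{
	memset(req, 0, 24);              /* RESERVED bytes stay zero */
	memcpy(req + 0, uuid, 16);       /* UUID of wanted extension */
	if (info_only)
		req[16] |= 1u << 0;      /* FLAG_INFO_ONLY (LBN 0) */
	req[20] = 0xff;                  /* ADMIN_GROUP = ANY (0xffff), */
	req[21] = 0xff;                  /* 16-bit little-endian field */
}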
+
+/***********************************/
+/* MC_CMD_PLUGIN_FREE
+ * Delete a handle to a plugin's extension.
+ */
+#define MC_CMD_PLUGIN_FREE 0x1ae
+#define MC_CMD_PLUGIN_FREE_MSGSET 0x1ae
+#undef MC_CMD_0x1ae_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ae_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_FREE_IN msgrequest */
+#define MC_CMD_PLUGIN_FREE_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_FREE_OUT msgresponse */
+#define MC_CMD_PLUGIN_FREE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_GLOBAL
+ * Returns the global metadata applying to the whole plugin extension. See the
+ * other metadata calls for subtypes of data.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL 0x1af
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_MSGSET 0x1af
+#undef MC_CMD_0x1af_PRIVILEGE_CTG
+
+#define MC_CMD_0x1af_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_LEN 36
+/* Unique identifier of this plugin extension. This is identical to the value
+ * which was requested when the handle was allocated.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_LEN 16
+/* semver sub-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_OFST 16
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_LEN 2
+/* semver micro-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_OFST 18
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_LEN 2
+/* Number of different messages which can be sent to this extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_OFST 20
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_LEN 4
+/* Byte offset within the VI window of the plugin's mapped CSR window. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_OFST 24
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_LEN 2
+/* Number of bytes mapped through to the plugin's CSRs. 0 if that feature was
+ * not requested by the plugin (in which case MAPPED_CSR_OFFSET and
+ * MAPPED_CSR_FLAGS are ignored).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_OFST 26
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_LEN 2
+/* Flags indicating how to perform the CSR window mapping. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_LBN 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_WIDTH 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_LBN 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_WIDTH 1
+/* Identifier of the set of extensions which all change state together.
+ * Extensions having the same ADMIN_GROUP will always load and unload at the
+ * same time. ADMIN_GROUP values themselves are arbitrary (but they contain a
+ * generation number as an implementation detail to ensure that they're not
+ * reused rapidly).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_OFST 32
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_LEN 1
+/* Bitshift in MC_CMD_DEVEL_CLIENT_PRIVILEGE_MODIFY's MASK parameters
+ * corresponding to this extension, i.e. set the bit 1<<PRIVILEGE_BIT to permit
+ * access to this extension.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_OFST 33
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_LEN 1
+/* Reserved */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_OFST 34
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_LEN 2
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER
+ * Returns metadata supplied by the plugin author which describes this
+ * extension in a human-readable way. Contrast with
+ * MC_CMD_PLUGIN_GET_META_GLOBAL, which returns information needed for software
+ * to operate.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER 0x1b0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_MSGSET 0x1b0
+#undef MC_CMD_0x1b0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b0_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_LEN 12
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_LEN 4
+/* Category of data to return */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_LEN 4
+/* enum: Top-level information about the extension. The returned data is an
+ * array of key/value pairs using the keys in RFC5013 (Dublin Core) to describe
+ * the extension. The data is a back-to-back list of zero-terminated strings;
+ * the even-numbered fields (0,2,4,...) are keys and their following odd-
+ * numbered fields are the corresponding values. Both keys and values are
+ * nominally UTF-8. Per RFC5013, the same key may be repeated any number of
+ * times. Note that all information (including the key/value structure itself
+ * and the UTF-8 encoding) may have been provided by the plugin author, so
+ * callers must be cautious about parsing it. Callers should parse only the
+ * top-level structure to separate out the keys and values; the contents of the
+ * values is not expected to be machine-readable.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_EXTENSION_KVS 0x0
+/* Byte position of the data to be returned within the full data block of the
+ * given SUBTYPE.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_OFST 8
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMIN 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LEN(num) (4+1*(num))
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_NUM(len) (((len)-4)/1)
+/* Full length of the data block of the requested SUBTYPE, in bytes. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_LEN 4
+/* The information requested by SUBTYPE. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM 248
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM_MCDI2 1016
+
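Since the EXTENSION_KVS payload is plugin-author-supplied and the
comment above warns that callers must be cautious, a defensive parsing
sketch may help; it is illustrative only and assumes the caller has
already reassembled the full data block from OFFSET-based reads:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Walk back-to-back NUL-terminated strings where fields 0,2,4,... are
 * keys and each following field is the value; stop cleanly on
 * truncated or unterminated input rather than reading past the end.
 */
static void dump_extension_kvs(const char *buf, size_t len)
{
	size_t pos = 0;

	while (pos < len) {
		const char *key = buf + pos;
		size_t key_len = strnlen(key, len - pos);

		if (key_len == len - pos)
			return;          /* unterminated key */
		pos += key_len + 1;
		if (pos >= len)
			return;          /* key without a value */

		const char *val = buf + pos;
		size_t val_len = strnlen(val, len - pos);

		if (val_len == len - pos)
			return;          /* unterminated value */
		pos += val_len + 1;
		printf("%s: %s\n", key, val);
	}
}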
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_MSG
+ * Returns the simple metadata for a specific plugin request message. This
+ * supplies information necessary for the host to know how to build an
+ * MC_CMD_PLUGIN_REQ request.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG 0x1b1
+#define MC_CMD_PLUGIN_GET_META_MSG_MSGSET 0x1b1
+#undef MC_CMD_0x1b1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b1_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_MSG_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_LEN 8
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_LEN 4
+/* Unique message ID to obtain */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_MSG_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_LEN 44
+/* Unique message ID. This is the same value as the input parameter; it exists
+ * to allow future MCDI extensions which enumerate all messages.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_LEN 4
+/* Packed index number of this message, assigned by the MC to give each message
+ * a unique ID in an array to allow for more efficient storage/management.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_LEN 4
+/* Short human-readable codename for this message. This is conventionally
+ * formatted as a C identifier in the basic ASCII character set with any spare
+ * bytes at the end set to 0, however this convention is not enforced by the MC
+ * so consumers must check for all potential malformations before using it for
+ * a trusted purpose.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_OFST 8
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_LEN 32
+/* Number of bytes of data which must be passed from the host kernel to the MC
+ * for this message's payload, and which are passed back again in the response.
+ * The MC's plugin metadata loader will have validated that the number of bytes
+ * specified here will fit in to MC_CMD_PLUGIN_REQ_IN_DATA in a single MCDI
+ * message.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_OFST 40
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_LEN 4
+
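The NAME comment above says the C-identifier convention is not enforced
by the MC, so a consumer might validate the 32-byte field before
trusting it. One possible check, purely as a sketch:

#include <ctype.h>
#include <stdint.h>

/* Return 1 if NAME is a NUL-terminated C identifier (letter or '_'
 * first, then letters, digits or '_') with all spare bytes zeroed.
 */
static int plugin_msg_name_ok(const uint8_t name[32])
{
	int i, end = -1;

	for (i = 0; i < 32; i++) {
		if (name[i] == '\0') {
			end = i;
			break;
		}
	}
	if (end <= 0)
		return 0;                /* empty or unterminated */
	if (!(isalpha(name[0]) || name[0] == '_'))
		return 0;
	for (i = 1; i < end; i++) {
		if (!(isalnum(name[i]) || name[i] == '_'))
			return 0;
	}
	for (i = end; i < 32; i++) {
		if (name[i] != '\0')
			return 0;        /* spare bytes must be zero */
	}
	return 1;
}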
/* PLUGIN_EXTENSION structuredef: Used within MC_CMD_PLUGIN_GET_ALL to describe
* an individual extension.
*/
@@ -26561,6 +27173,100 @@
#define PLUGIN_EXTENSION_RESERVED_LBN 137
#define PLUGIN_EXTENSION_RESERVED_WIDTH 23
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_ALL
+ * Returns a list of all plugin extensions currently loaded and available. The
+ * UUIDs returned can be passed to MC_CMD_PLUGIN_ALLOC in order to obtain more
+ * detailed metadata via the MC_CMD_PLUGIN_GET_META_* family of requests. The
+ * ADMIN_GROUP field collects how extensions are grouped into units which are
+ * loaded/unloaded together; extensions with the same value are in the same
+ * group.
+ */
+#define MC_CMD_PLUGIN_GET_ALL 0x1b2
+#define MC_CMD_PLUGIN_GET_ALL_MSGSET 0x1b2
+#undef MC_CMD_0x1b2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b2_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_ALL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_ALL_IN_LEN 4
+/* Additional options for querying. Note that if neither FLAG_INCLUDE_ENABLED
+ * nor FLAG_INCLUDE_DISABLED are specified then the result set will be empty.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_LBN 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_WIDTH 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_WIDTH 1
+
+/* MC_CMD_PLUGIN_GET_ALL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX 240
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LEN(num) (0+20*(num))
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_NUM(len) (((len)-0)/20)
+/* The list of available plugin extensions, as an array of PLUGIN_EXTENSION
+ * structs.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_LEN 20
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MINNUM 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM 12
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM_MCDI2 51
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_REQ
+ * Send a command to a plugin. A plugin may define an arbitrary number of
+ * 'messages' which it allows applications on the host system to send, each
+ * identified by a 32-bit ID.
+ */
+#define MC_CMD_PLUGIN_REQ 0x1b3
+#define MC_CMD_PLUGIN_REQ_MSGSET 0x1b3
+#undef MC_CMD_0x1b3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_REQ_IN msgrequest */
+#define MC_CMD_PLUGIN_REQ_IN_LENMIN 8
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_IN_LEN(num) (8+1*(num))
+#define MC_CMD_PLUGIN_REQ_IN_DATA_NUM(len) (((len)-8)/1)
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_LEN 4
+/* Message ID defined by the plugin author */
+#define MC_CMD_PLUGIN_REQ_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_REQ_IN_ID_LEN 4
+/* Data blob being the parameter to the message. This must be of the length
+ * specified by MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE.
+ */
+#define MC_CMD_PLUGIN_REQ_IN_DATA_OFST 8
+#define MC_CMD_PLUGIN_REQ_IN_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM 244
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM_MCDI2 1012
+
+/* MC_CMD_PLUGIN_REQ_OUT msgresponse */
+#define MC_CMD_PLUGIN_REQ_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_OUT_LEN(num) (0+1*(num))
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_NUM(len) (((len)-0)/1)
+/* The input data, as transformed and/or updated by the plugin's eBPF. Will be
+ * the same size as the input DATA parameter.
+ */
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_OFST 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM 252
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM_MCDI2 1020
+
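The LEN(num) and DATA_NUM(len) macros above are inverses over the
variable-length DATA array; a small standalone worked example, with the
two macros restated locally:

#include <assert.h>

#define PLUGIN_REQ_IN_LEN(num)      (8 + 1 * (num))
#define PLUGIN_REQ_IN_DATA_NUM(len) (((len) - 8) / 1)

int main(void)
{
	/* A message whose META_MSG DATA_SIZE is 40 bytes needs a
	 * 48-byte request: 4 (HANDLE) + 4 (ID) + 40 (DATA).
	 */
	unsigned int data_size = 40;    /* example value */
	unsigned int req_len = PLUGIN_REQ_IN_LEN(data_size);

	assert(req_len == 48);
	assert(PLUGIN_REQ_IN_DATA_NUM(req_len) == data_size);
	return 0;
}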
/* DESC_ADDR_REGION structuredef: Describes a contiguous region of DESC_ADDR
* space that maps to a contiguous region of TRGT_ADDR space. Addresses
* DESC_ADDR in the range [DESC_ADDR_BASE:DESC_ADDR_BASE + 1 <<
@@ -27219,6 +27925,38 @@
#define MC_CMD_VIRTIO_TEST_FEATURES_OUT_LEN 0
+/***********************************/
+/* MC_CMD_VIRTIO_GET_CAPABILITIES
+ * Get virtio capabilities supported by the device. Returns general virtio
+ * capabilities and limitations of the hardware / firmware implementation
+ * (hardware device as a whole), rather than that of individual configured
+ * virtio devices. At present, only the absolute maximum number of queues
+ * allowed on multi-queue devices is returned. Response is expected to be
+ * extended as necessary in the future.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES 0x1d3
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_MSGSET 0x1d3
+#undef MC_CMD_0x1d3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_IN msgrequest */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN 4
+/* Type of device to get capabilities for. Matches the device id as defined by
+ * the virtio spec.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_VIRTIO_GET_FEATURES/MC_CMD_VIRTIO_GET_FEATURES_IN/DEVICE_ID */
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_OUT msgresponse */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN 4
+/* Maximum number of queues supported for a single device instance */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_LEN 4
+
+
/***********************************/
/* MC_CMD_VIRTIO_INIT_QUEUE
* Create a virtio virtqueue. Fails with EALREADY if the queue already exists.
@@ -27490,6 +28228,24 @@
#define PCIE_FUNCTION_INTF_LBN 32
#define PCIE_FUNCTION_INTF_WIDTH 32
+/* QUEUE_ID structuredef: Structure representing an absolute queue identifier
+ * (absolute VI number + VI relative queue number). On Keystone, a VI can
+ * contain multiple queues (at present, up to 2), each with separate controls
+ * for direction. This structure is required to uniquely identify the absolute
+ * source queue for descriptor proxy functions.
+ */
+#define QUEUE_ID_LEN 4
+/* Absolute VI number */
+#define QUEUE_ID_ABS_VI_OFST 0
+#define QUEUE_ID_ABS_VI_LEN 2
+#define QUEUE_ID_ABS_VI_LBN 0
+#define QUEUE_ID_ABS_VI_WIDTH 16
+/* Relative queue number within the VI */
+#define QUEUE_ID_REL_QUEUE_LBN 16
+#define QUEUE_ID_REL_QUEUE_WIDTH 1
+#define QUEUE_ID_RESERVED_LBN 17
+#define QUEUE_ID_RESERVED_WIDTH 15
+
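As a sketch only, packing and unpacking the QUEUE_ID bitfields defined
above (ABS_VI in bits 0..15, REL_QUEUE in bit 16, the rest reserved)
could look like this:

#include <stdint.h>

static uint32_t queue_id_pack(uint16_t abs_vi, unsigned int rel_queue)
{
	return (uint32_t)abs_vi | ((uint32_t)(rel_queue & 1) << 16);
}

static uint16_t queue_id_abs_vi(uint32_t qid)
{
	return qid & 0xffff;
}

static unsigned int queue_id_rel_queue(uint32_t qid)
{
	return (qid >> 16) & 1;
}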
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_CREATE
@@ -28088,7 +28844,11 @@
* Enable descriptor proxying for function into target event queue. Returns VI
* allocation info for the proxy source function, so that the caller can map
* absolute VI IDs from descriptor proxy events back to the originating
- * function.
+ * function. This is a legacy function that only supports single queue proxy
+ * devices. It is also limited in that it can only be called after host driver
+ * attach (once VI allocation is known) and will return MC_CMD_ERR_ENOTCONN
+ * otherwise. For new code, see MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE which
+ * supports multi-queue devices and has no dependency on host driver attach.
*/
#define MC_CMD_DESC_PROXY_FUNC_ENABLE 0x178
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_MSGSET 0x178
@@ -28119,9 +28879,46 @@
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_OUT_VI_BASE_LEN 4
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE
+ * Enable descriptor proxying for a source queue on a host function into target
+ * event queue. Source queue number is a relative virtqueue number on the
+ * source function (0 to max_virtqueues-1). For a multi-queue device, the
+ * caller must enable all source queues individually. To retrieve absolute VI
+ * information for the source function (so that VI IDs from descriptor proxy
+ * events can be mapped back to source function / queue) see
+ * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE 0x1d0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_MSGSET 0x1d0
+#undef MC_CMD_0x1d0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d0_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN 12
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to enable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+/* Descriptor proxy sink queue (caller function relative). Must be extended
+ * width event queue
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST 8
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT_LEN 0
+
+
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_DISABLE
- * Disable descriptor proxying for function
+ * Disable descriptor proxying for function. For multi-queue functions,
+ * disables all queues.
*/
#define MC_CMD_DESC_PROXY_FUNC_DISABLE 0x179
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_MSGSET 0x179
@@ -28141,6 +28938,77 @@
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_OUT_LEN 0
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE
+ * Disable descriptor proxying for a specific source queue on a function.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE 0x1d1
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_MSGSET 0x1d1
+#undef MC_CMD_0x1d1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d1_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_LEN 8
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to disable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_DESC_PROXY_GET_VI_INFO
+ * Returns absolute VI allocation information for the descriptor proxy source
+ * function referenced by HANDLE, so that the caller can map absolute VI IDs
+ * from descriptor proxy events back to the originating function and queue. The
+ * call is only valid after the host driver for the source function has
+ * attached (after receiving a driver attach event for the descriptor proxy
+ * function) and will fail with ENOTCONN otherwise.
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO 0x1d2
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_MSGSET 0x1d2
+#undef MC_CMD_0x1d2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d2_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_GET_VI_INFO_IN msgrequest */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_LEN 4
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMIN 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX 252
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(len) (((len)-0)/4)
+/* VI information (VI ID + VI relative queue number) for each of the source
+ * queues (in order from 0 to max_virtqueues-1), as array of QUEUE_ID
+ * structures.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN 4
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MINNUM 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM 63
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM_MCDI2 255
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_LEN 2
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN 16
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_WIDTH 1
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_LBN 17
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_WIDTH 15
+
+
/***********************************/
/* MC_CMD_GET_ADDR_SPC_ID
* Get Address space identifier for use in mem2mem descriptors for a given
@@ -29384,9 +30252,12 @@
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_OFST 4
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_LBN 3
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
-/* The total number of counters available to allocate. */
+/* Deprecated alias for AR_COUNTERS. */
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_OFST 8
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_LEN 4
/* The total number of counters lists available to allocate. A value of zero
* indicates that counter lists are not supported by the NIC. (But single
* counters may still be.)
@@ -29429,6 +30300,87 @@
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_OFST 48
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_LEN 4
+/* MC_CMD_MAE_GET_CAPS_V2_OUT msgresponse */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_LEN 60
+/* The number of field IDs that the NIC supports. Any field with a ID greater
+ * than or equal to the value returned in this field must be treated as having
+ * a support level of MAE_FIELD_UNSUPPORTED in all requests.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_OFST 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+/* Deprecated alias for AR_COUNTERS. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_LEN 4
+/* The total number of counters lists available to allocate. A value of zero
+ * indicates that counter lists are not supported by the NIC. (But single
+ * counters may still be.)
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_OFST 12
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_LEN 4
+/* The total number of encap header structures available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_OFST 16
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_LEN 4
+/* Reserved. Should be zero. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_OFST 20
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_LEN 4
+/* The total number of action sets available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_OFST 24
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_LEN 4
+/* The total number of action set lists available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_OFST 28
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_LEN 4
+/* The total number of outer rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_OFST 32
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_LEN 4
+/* The total number of action rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_OFST 36
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_LEN 4
+/* The number of priorities available for ACTION_RULE filters. It is invalid to
+ * install a MATCH_ACTION filter with a priority number >= ACTION_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_OFST 40
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_LEN 4
+/* The number of priorities available for OUTER_RULE filters. It is invalid to
+ * install an OUTER_RULE filter with a priority number >= OUTER_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_OFST 44
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_LEN 4
+/* MAE API major version. Currently 1. If this field is not present in the
+ * response (i.e. response shorter than 384 bits), then its value is zero. If
+ * the value does not match the client's expectations, the client should raise
+ * a fatal error.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_OFST 48
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_LEN 4
+/* Mask of supported counter types. Each bit position corresponds to a value of
+ * the MAE_COUNTER_TYPE enum. If this field is missing (i.e. V1 response),
+ * clients must assume that only AR counters are supported (i.e.
+ * COUNTER_TYPES_SUPPORTED==0x1). See also
+ * MC_CMD_MAE_COUNTERS_STREAM_START/COUNTER_TYPES_MASK.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST 52
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN 4
+/* The total number of conntrack counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_OFST 56
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_LEN 4
+
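Per the COUNTER_TYPES_SUPPORTED comment above, a client receiving a
V1-sized response must assume AR-only support. A hypothetical
normalization helper (MAE_COUNTER_TYPE_AR==0 is stated elsewhere in
this patch; the byte offsets come from the definitions above):

#include <stdint.h>

#define MAE_COUNTER_TYPE_AR 0

/* resp/resp_len describe the GET_CAPS response actually received. */
static uint32_t counter_types_supported(const uint8_t *resp,
					unsigned int resp_len)
{
	if (resp_len < 56)              /* field at OFST 52, LEN 4 */
		return 1u << MAE_COUNTER_TYPE_AR;
	return (uint32_t)resp[52] | (uint32_t)resp[53] << 8 |
	       (uint32_t)resp[54] << 16 | (uint32_t)resp[55] << 24;
}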
/***********************************/
/* MC_CMD_MAE_GET_AR_CAPS
@@ -29495,8 +30447,8 @@
/***********************************/
/* MC_CMD_MAE_COUNTER_ALLOC
- * Allocate match-action-engine counters, which can be referenced in Action
- * Rules.
+ * Allocate match-action-engine counters, which can be referenced in various
+ * tables.
*/
#define MC_CMD_MAE_COUNTER_ALLOC 0x143
#define MC_CMD_MAE_COUNTER_ALLOC_MSGSET 0x143
@@ -29504,12 +30456,25 @@
#define MC_CMD_0x143_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_LEN 4
/* The number of counters that the driver would like allocated */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTER_ALLOC_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_LEN 8
+/* The number of counters that the driver would like allocated */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_LEN 4
+/* Which type of counter to allocate. */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_OFST 4
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_ALLOC_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX 252
@@ -29518,7 +30483,8 @@
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. Packets with generation count >= GENERATION_COUNT will
* contain valid counter values for counter IDs allocated in this call, unless
- * the counter values are zero and zero squash is enabled.
+ * the counter values are zero and zero squash is enabled. Note that there is
+ * an independent GENERATION_COUNT object per counter type.
*/
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_LEN 4
@@ -29548,7 +30514,9 @@
#define MC_CMD_0x144_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMIN 8
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX 132
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2 132
@@ -29564,6 +30532,23 @@
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM 32
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* MC_CMD_MAE_COUNTER_FREE_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_LEN 136
+/* The number of counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_LEN 4
+/* An array containing the counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_OFST 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_LEN 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MINNUM 1
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM 32
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* Which type of counter to free. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_OFST 132
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_FREE_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX 136
@@ -29572,11 +30557,13 @@
#define MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. A packet with generation count == GENERATION_COUNT will
* contain the final values for these counter IDs, unless the counter values
- * are zero and zero squash is enabled. Receiving a packet with generation
- * count > GENERATION_COUNT guarantees that no more values will be written for
- * these counters. If values for these counter IDs are present, the counter ID
- * has been reallocated. A counter ID will not be reallocated within a single
- * read cycle as this would merge increments from the 'old' and 'new' counters.
+ * are zero and zero squash is enabled. Note that the GENERATION_COUNT value is
+ * specific to the COUNTER_TYPE (IDENTIFIER field in packet header). Receiving
+ * a packet with generation count > GENERATION_COUNT guarantees that no more
+ * values will be written for these counters. If values for these counter IDs
+ * are present, the counter ID has been reallocated. A counter ID will not be
+ * reallocated within a single read cycle as this would merge increments from
+ * the 'old' and 'new' counters.
*/
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_LEN 4
@@ -29616,7 +30603,9 @@
#define MC_CMD_0x151_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest */
+/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest: Using V1 is equivalent to V2
+ * with COUNTER_TYPES_MASK=0x1 (i.e. AR counters only).
+ */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN 8
/* The RxQ to write packets to. */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_QID_OFST 0
@@ -29634,6 +30623,35 @@
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_LBN 1
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_WIDTH 1
+/* MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN 12
+/* The RxQ to write packets to. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_LEN 2
+/* Maximum size in bytes of packets that may be written to the RxQ. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST 2
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_LEN 2
+/* Optional flags. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_LBN 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_WIDTH 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_LBN 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_WIDTH 1
+/* Mask of which counter types should be reported. Each bit position
+ * corresponds to a value of the MAE_COUNTER_TYPE enum. For example a value of
+ * 0x3 requests both AR and CT counters. A value of zero is invalid. Counter
+ * types not selected by the mask value won't be included in the stream. If a
+ * client wishes to change which counter types are reported, it must first call
+ * MAE_COUNTERS_STREAM_STOP, then restart it with the new mask value.
+ * Requesting a counter type which isn't supported by firmware (reported in
+ * MC_CMD_MAE_GET_CAPS/COUNTER_TYPES_SUPPORTED) will result in ENOTSUP.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST 8
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_LEN 4
+
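Since firmware returns ENOTSUP for a mask naming an unsupported counter
type and a zero mask is invalid, a caller could reject such masks
locally before issuing the V2 start; a sketch:

#include <errno.h>
#include <stdint.h>

static int check_stream_types_mask(uint32_t requested, uint32_t supported)
{
	if (requested == 0)
		return -EINVAL;         /* zero mask is invalid */
	if (requested & ~supported)
		return -ENOTSUP;        /* type not in CAPS mask */
	return 0;
}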
/* MC_CMD_MAE_COUNTERS_STREAM_START_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN 4
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_FLAGS_OFST 0
@@ -29661,14 +30679,32 @@
/* MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN 4
-/* Generation count. The final set of counter values will be written out in
- * packets with count == GENERATION_COUNT. An empty packet with count >
- * GENERATION_COUNT indicates that no more counter values will be written to
- * this stream.
+/* Generation count for AR counters. The final set of AR counter values will be
+ * written out in packets with count == GENERATION_COUNT. An empty packet with
+ * count > GENERATION_COUNT indicates that no more counter values of this type
+ * will be written to this stream.
*/
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT msgresponse */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMIN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX_MCDI2 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_NUM(len) (((len)-0)/4)
+/* Array of generation counts, indexed by MAE_COUNTER_TYPE. Note that since
+ * MAE_COUNTER_TYPE_AR==0, this response is backwards-compatible with V1. The
+ * final set of counter values will be written out in packets with count ==
+ * GENERATION_COUNT. An empty packet with count > GENERATION_COUNT indicates
+ * that no more counter values of this type will be written to this stream.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MINNUM 1
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM 8
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM_MCDI2 8
+
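Reading the per-type generation counts back out of a STOP response
might look like the sketch below; because MAE_COUNTER_TYPE_AR==0, a
4-byte V1 response simply yields an array of length one:

#include <stdint.h>

#define STOP_V2_GC_NUM(len) (((len) - 0) / 4)  /* restated from above */

static int stop_generation_count(const uint8_t *resp,
				 unsigned int resp_len,
				 unsigned int counter_type, uint32_t *gc)
{
	const uint8_t *p;

	if (counter_type >= STOP_V2_GC_NUM(resp_len))
		return -1;      /* type absent from this response */
	p = resp + 4 * counter_type;
	*gc = (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	      (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
	return 0;
}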
/***********************************/
/* MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS
@@ -29941,9 +30977,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_LEN 4
@@ -30021,9 +31058,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_LEN 4
@@ -30352,7 +31390,8 @@
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_LBN 64
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_WIDTH 32
/* Counter ID to increment if DO_CT or DO_RECIRC is set. Must be set to
- * COUNTER_ID_NULL otherwise.
+ * COUNTER_ID_NULL otherwise. Counter ID must have been allocated with
+ * COUNTER_TYPE=AR.
*/
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_OFST 12
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_LEN 4
@@ -30710,6 +31749,108 @@
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_LBN 352
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_WIDTH 32
+/* MAE_MPORT_DESC_V2 structuredef */
+#define MAE_MPORT_DESC_V2_LEN 56
+#define MAE_MPORT_DESC_V2_MPORT_ID_OFST 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_MPORT_ID_LBN 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_WIDTH 32
+/* Reserved for future purposes, contains information independent of caller */
+#define MAE_MPORT_DESC_V2_FLAGS_OFST 4
+#define MAE_MPORT_DESC_V2_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_FLAGS_LBN 32
+#define MAE_MPORT_DESC_V2_FLAGS_WIDTH 32
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_OFST 8
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_LBN 0
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_LBN 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELETE_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELETE_LBN 2
+#define MAE_MPORT_DESC_V2_CAN_DELETE_WIDTH 1
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_OFST 8
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_LBN 3
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_WIDTH 1
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LBN 64
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_WIDTH 32
+/* Not the ideal name; it's really the type of thing connected to the m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_OFST 12
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LEN 4
+/* enum: Connected to a MAC... */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT 0x0
+/* enum: Adds metadata and delivers to another m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS 0x1
+/* enum: Connected to a VNIC. */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC 0x2
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LBN 96
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_WIDTH 32
+/* 128-bit value available to drivers for m-port identification. */
+#define MAE_MPORT_DESC_V2_UUID_OFST 16
+#define MAE_MPORT_DESC_V2_UUID_LEN 16
+#define MAE_MPORT_DESC_V2_UUID_LBN 128
+#define MAE_MPORT_DESC_V2_UUID_WIDTH 128
+/* Big wadge of space reserved for other common properties */
+#define MAE_MPORT_DESC_V2_RESERVED_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LEN 8
+#define MAE_MPORT_DESC_V2_RESERVED_LO_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_LO_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_HI_OFST 36
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LBN 288
+#define MAE_MPORT_DESC_V2_RESERVED_HI_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_WIDTH 64
+/* Logical port index. Only valid when type NET Port. */
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_OFST 40
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LEN 4
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LBN 320
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_WIDTH 32
+/* The m-port delivered to */
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_OFST 40
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LBN 320
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_WIDTH 32
+/* The type of thing that owns the VNIC */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST 40
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION 0x1 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN 0x2 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LBN 320
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_WIDTH 32
+/* The PCIe interface on which the function lives. CJK: We need an enumeration
+ * of interfaces that we extend as new interface (types) appear. This belongs
+ * elsewhere and should be referenced from here
+ */
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_WIDTH 32
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST 48
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LEN 2
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LBN 384
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_WIDTH 16
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST 50
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LEN 2
+/* enum: Indicates that the function is a PF */
+#define MAE_MPORT_DESC_V2_VF_IDX_NULL 0xffff
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LBN 400
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_WIDTH 16
+/* Reserved. Should be ignored for now. */
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_WIDTH 32
+/* A client handle for the VNIC's owner. Only valid for type VNIC. */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST 52
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LBN 416
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_WIDTH 32
+
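The word at offset 40 of MAE_MPORT_DESC_V2 is overlaid and must be
interpreted according to MPORT_TYPE; a consumer-side dispatch sketch
(illustrative, with little-endian decoding assumed):

#include <stdint.h>
#include <stdio.h>

static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

/* desc points at one 56-byte MAE_MPORT_DESC_V2. */
static void print_mport(const uint8_t desc[56])
{
	uint32_t type = get_le32(desc + 12);    /* MPORT_TYPE */

	switch (type) {
	case 0x0:       /* NET_PORT */
		printf("net port idx %u\n", get_le32(desc + 40));
		break;
	case 0x1:       /* ALIAS */
		printf("alias -> m-port 0x%x\n", get_le32(desc + 40));
		break;
	case 0x2:       /* VNIC */
		printf("VNIC client type %u, handle 0x%x\n",
		       get_le32(desc + 40), get_le32(desc + 52));
		break;
	default:
		printf("unknown m-port type %u\n", type);
	}
}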
/***********************************/
/* MC_CMD_MAE_MPORT_ENUMERATE
--
2.30.2
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [Bug 797] [dpdk-21.11]Segmentation fault when start txonly packet forward after set txpkts=40, 64 and txsplit=rand
@ 2021-08-27 1:47 4% bugzilla
0 siblings, 0 replies; 200+ results
From: bugzilla @ 2021-08-27 1:47 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=797
Bug ID: 797
Summary: [dpdk-21.11]Segmentation fault when start txonly
packet forward after set txpkts=40,64 and txsplit=rand
Product: DPDK
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: testpmd
Assignee: dev@dpdk.org
Reporter: yux.jiang@intel.com
Target Milestone: ---
Environment
DPDK version:
commit fdab8f2e17493192d555cd88cf28b06269174326 (HEAD, origin/main)
Author: Thomas Monjalon <thomas@monjalon.net>
Date: Sun Aug 8 21:26:58 2021 +0200

    version: 21.11-rc0

    Start a new release cycle with empty release notes.
    The ABI version becomes 22.0.
    The map files are updated to the new ABI major number (22).
    The ABI exceptions are dropped and CI ABI checks are disabled
    because compatibility is not preserved.

    Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
    Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
    Acked-by: David Marchand <david.marchand@redhat.com>
Other software versions:
OS: PRETTY_NAME="Fedora 34 (Server Edition)"/5.12.14-300.fc34.x86_64
Compiler: gcc version 11.1.1 20210531 (Red Hat 11.1.1-3)
Hardware platform: Intel(R) Xeon(R) Gold 6252N CPU @ 2.30GHz
NIC hardware: Ethernet Controller X710 for 10GbE SFP+ 1572
NIC firmware:
[root@fedora dpdk]# ethtool -i enp217s0f0
driver: i40e
version: 2.15.9
firmware-version: 8.30 0x8000a49d 1.2926.0
Test Setup
# Build dpdk
rm -rf x86_64-native-linuxapp-gcc
CC=gcc meson --werror -Denable_kmods=True -Dbuildtype=debug -Dlibdir=lib --default-library=static x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc -j 40
# bind nic to vfio-pci and start testpmd
./usertools/dpdk-devbind.py -b vfio-pci d9:00.0 d9:00.1
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4 -n 4
--file-prefix=dpdk_339481_20210823205202 -- -i
# Set txpkts and txsplit and start
testpmd> set fwd txonly
Set txonly packet forwarding mode
testpmd> set txpkts 40,64
testpmd> set txsplit rand
testpmd> start
testpmd>
Show the output from the previous commands.
testpmd> start
txonly packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
txonly packet forwarding packets/burst=32
packet len=104 - nb packet segments=2
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
testpmd> Segmentation fault (core dumped)
[root@fedora dpdk]#
Expected Result
No core dump occurs.
Regression
Is this issue a regression: (Y/N) N
This is the first test with this command sequence.
Stack Trace or Log
# If txsplit is set to on, there is no core dump.
# gdb ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd
./core-lcore-worker-2-46341-1629784290
warning: Unable to find libthread_db matching inferior's thread library, thread
debugging will not be available.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4
-n 4 --file-prefix=dpdk_'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00000000005fbbc6 in copy_buf_to_pkt_segs (buf=0x3de3e3e <pkt_udp_hdr+6>,
len=2, pkt=0x166c5e400, offset=34) at ../app/test-pmd/txonly.c:86
86 seg_buf = rte_pktmbuf_mtod(seg, char *);
[Current thread is 1 (Thread 0x7f47712a0400 (LWP 46344))]
Missing separate debuginfos, use: dnf debuginfo-install
elfutils-libelf-0.185-2.fc34.x86_64 glibc-2.33-5.fc34.x86_64
jansson-2.13.1-2.fc34.x86_64 libfdt-1.6.1-1.fc34.x86_64
libibverbs-34.0-3.fc34.x86_64 libnl3-3.5.0-6.fc34.x86_64
libpcap-1.10.1-1.fc34.x86_64 numactl-libs-2.0.14-3.fc34.x86_64
openssl-libs-1.1.1k-1.fc34.x86_64 zlib-1.2.11-26.fc34.x86_64
(gdb) bt
#0 0x00000000005fbbc6 in copy_buf_to_pkt_segs (buf=0x3de3e3e <pkt_udp_hdr+6>,
len=2, pkt=0x166c5e400, offset=34) at ../app/test-pmd/txonly.c:86
#1 0x00000000005fe51c in copy_buf_to_pkt (buf=0x3de3e38 <pkt_udp_hdr>, len=8,
pkt=0x166c5e400, offset=34) at ../app/test-pmd/txonly.c:100
#2 0x00000000005ff51b in pkt_burst_prepare (pkt=0x166c5e400, mbp=0x17f0f0180,
eth_hdr=0x7f477129b182, vlan_tci=0, vlan_tci_outer=0, ol_flags=0, idx=1,
fs=0x166bbb4c0) at ../app/test-pmd/txonly.c:251
#3 0x000000000060043f in pkt_burst_transmit (fs=0x166bbb4c0) at
../app/test-pmd/txonly.c:372
#4 0x00000000005f0362 in run_pkt_fwd_on_lcore (fc=0x17fb3adc0,
pkt_fwd=0x5ff7eb <pkt_burst_transmit>) at ../app/test-pmd/testpmd.c:2049
#5 0x00000000005f0452 in start_pkt_forward_on_core (fwd_arg=0x17fb3adc0) at
../app/test-pmd/testpmd.c:2075
#6 0x0000000000aa8910 in eal_thread_loop (arg=0x0) at
../lib/eal/linux/eal_thread.c:127
#7 0x00007f477254f299 in start_thread () from /lib64/libpthread.so.0
#8 0x00007f47724776a3 in clone () from /lib64/libc.so.6
(gdb)
--
You are receiving this mail because:
You are the assignee for the bug.
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2] ethdev: add namespace
@ 2021-08-27 1:19 1% ` Ferruh Yigit
2021-08-30 17:19 1% ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-27 1:19 UTC (permalink / raw)
To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
Min Hu (Connor),
Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Beilei Xing,
Haiyue Wang, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
Keith Wiles, Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal,
Declan Doherty, Ray Kinsella, Radu Nicolau, Hemant Agrawal,
Sachin Saxena, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, John W. Linville, Ciara Loftus,
Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Bruce Richardson, Konstantin Ananyev,
Ruifeng Wang, Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk,
Shai Brandes, Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh,
Gaetan Rivet, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Jingjing Wu, Qiming Yang, Andrew Boyer,
Rosen Xu, Srisivasubramanian Srinivasan, Jakub Grajciar,
Zyta Szpak, Liron Himi, Stephen Hemminger, Long Li,
Martin Spinler, Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa,
Harman Kalra, Anoob Joseph, Nalla Pradeep,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Devendra Singh Rawat, Jasvinder Singh, Maciej Czekaj, Jian Wang,
Maxime Coquelin, Chenbo Xia, Yong Wang, Nicolas Chautru,
David Hunt, Harry van Haaren, Bernard Iremonger, Anatoly Burakov,
John McNamara, Kirill Rybalchenko, Byron Marohn, Yipeng Wang
Cc: Ferruh Yigit, dev, Tyler Retzlaff
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in the next LTS.
Internal components switched to new enum & macro names.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-By: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
v2:
* Updated internal components
* Removed deprecation notice
---
app/proc-info/main.c | 8 +-
app/test-eventdev/test_perf_common.c | 4 +-
app/test-eventdev/test_pipeline_common.c | 12 +-
app/test-flow-perf/config.h | 2 +-
app/test-pipeline/init.c | 8 +-
app/test-pmd/cmdline.c | 290 +++---
app/test-pmd/config.c | 198 ++--
app/test-pmd/csumonly.c | 28 +-
app/test-pmd/flowgen.c | 6 +-
app/test-pmd/macfwd.c | 6 +-
app/test-pmd/macswap_common.h | 6 +-
app/test-pmd/parameters.c | 54 +-
app/test-pmd/testpmd.c | 60 +-
app/test-pmd/testpmd.h | 2 +-
app/test-pmd/txonly.c | 6 +-
app/test/test_ethdev_link.c | 68 +-
app/test/test_event_eth_rx_adapter.c | 4 +-
app/test/test_kni.c | 2 +-
app/test/test_link_bonding.c | 4 +-
app/test/test_link_bonding_mode4.c | 4 +-
| 10 +-
app/test/test_pmd_perf.c | 12 +-
app/test/virtual_pmd.c | 10 +-
doc/guides/eventdevs/cnxk.rst | 2 +-
doc/guides/eventdevs/octeontx2.rst | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 +-
doc/guides/nics/bnxt.rst | 26 +-
doc/guides/nics/enic.rst | 2 +-
doc/guides/nics/features.rst | 116 +--
doc/guides/nics/fm10k.rst | 6 +-
doc/guides/nics/intel_vf.rst | 10 +-
doc/guides/nics/ixgbe.rst | 12 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/tap.rst | 2 +-
.../generic_segmentation_offload_lib.rst | 8 +-
doc/guides/prog_guide/mbuf_lib.rst | 18 +-
doc/guides/prog_guide/poll_mode_drv.rst | 8 +-
doc/guides/prog_guide/rte_flow.rst | 34 +-
doc/guides/prog_guide/rte_security.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 12 +-
doc/guides/sample_app_ug/ipsec_secgw.rst | 4 +-
doc/guides/testpmd_app_ug/run_app.rst | 2 +-
drivers/bus/dpaa/include/process.h | 16 +-
drivers/common/cnxk/roc_npc.h | 2 +-
drivers/net/af_packet/rte_eth_af_packet.c | 16 +-
drivers/net/af_xdp/rte_eth_af_xdp.c | 12 +-
drivers/net/ark/ark_ethdev.c | 16 +-
drivers/net/atlantic/atl_ethdev.c | 90 +-
drivers/net/atlantic/atl_ethdev.h | 18 +-
drivers/net/atlantic/atl_rxtx.c | 6 +-
drivers/net/avp/avp_ethdev.c | 26 +-
drivers/net/axgbe/axgbe_dev.c | 6 +-
drivers/net/axgbe/axgbe_ethdev.c | 102 +-
drivers/net/axgbe/axgbe_ethdev.h | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 2 +-
drivers/net/axgbe/axgbe_rxtx.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 16 +-
drivers/net/bnxt/bnxt.h | 68 +-
drivers/net/bnxt/bnxt_ethdev.c | 170 ++--
drivers/net/bnxt/bnxt_flow.c | 4 +-
drivers/net/bnxt/bnxt_hwrm.c | 112 +--
drivers/net/bnxt/bnxt_reps.c | 2 +-
drivers/net/bnxt/bnxt_ring.c | 4 +-
drivers/net/bnxt/bnxt_rxq.c | 28 +-
drivers/net/bnxt/bnxt_rxr.c | 4 +-
drivers/net/bnxt/bnxt_rxtx_vec_avx2.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_common.h | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_neon.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
drivers/net/bnxt/bnxt_txr.c | 4 +-
drivers/net/bnxt/bnxt_vnic.c | 30 +-
drivers/net/bnxt/rte_pmd_bnxt.c | 8 +-
drivers/net/bonding/eth_bond_private.h | 2 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 16 +-
drivers/net/bonding/rte_eth_bond_api.c | 6 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 42 +-
drivers/net/cnxk/cn10k_ethdev.c | 38 +-
drivers/net/cnxk/cn10k_rx.c | 4 +-
drivers/net/cnxk/cn10k_tx.c | 4 +-
drivers/net/cnxk/cn9k_ethdev.c | 56 +-
drivers/net/cnxk/cn9k_rx.c | 4 +-
drivers/net/cnxk/cn9k_tx.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 84 +-
drivers/net/cnxk/cnxk_ethdev.h | 49 +-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 104 +-
drivers/net/cnxk/cnxk_link.c | 14 +-
drivers/net/cnxk/cnxk_ptp.c | 4 +-
drivers/net/cnxk/cnxk_rte_flow.c | 2 +-
drivers/net/cxgbe/cxgbe.h | 48 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 42 +-
drivers/net/cxgbe/cxgbe_main.c | 12 +-
drivers/net/cxgbe/sge.c | 2 +-
drivers/net/dpaa/dpaa_ethdev.c | 190 ++--
drivers/net/dpaa/dpaa_ethdev.h | 10 +-
drivers/net/dpaa/dpaa_flow.c | 32 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 34 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 148 +--
drivers/net/dpaa2/dpaa2_ethdev.h | 12 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 8 +-
drivers/net/e1000/e1000_ethdev.h | 18 +-
drivers/net/e1000/em_ethdev.c | 68 +-
drivers/net/e1000/em_rxtx.c | 48 +-
drivers/net/e1000/igb_ethdev.c | 158 +--
drivers/net/e1000/igb_pf.c | 2 +-
drivers/net/e1000/igb_rxtx.c | 116 +--
drivers/net/ena/ena_ethdev.c | 70 +-
drivers/net/ena/ena_ethdev.h | 4 +-
drivers/net/ena/ena_rss.c | 66 +-
drivers/net/enetc/enetc_ethdev.c | 38 +-
drivers/net/enic/enic_ethdev.c | 80 +-
drivers/net/enic/enic_main.c | 40 +-
drivers/net/enic/enic_res.c | 52 +-
drivers/net/failsafe/failsafe.c | 8 +-
drivers/net/failsafe/failsafe_intr.c | 4 +-
drivers/net/failsafe/failsafe_ops.c | 82 +-
drivers/net/fm10k/fm10k.h | 4 +-
drivers/net/fm10k/fm10k_ethdev.c | 140 +--
drivers/net/fm10k/fm10k_rxtx_vec.c | 6 +-
drivers/net/hinic/base/hinic_pmd_hwdev.c | 22 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 134 +--
drivers/net/hinic/hinic_pmd_rx.c | 36 +-
drivers/net/hinic/hinic_pmd_rx.h | 22 +-
drivers/net/hns3/hns3_dcb.c | 14 +-
drivers/net/hns3/hns3_ethdev.c | 360 +++----
drivers/net/hns3/hns3_ethdev.h | 12 +-
drivers/net/hns3/hns3_ethdev_vf.c | 108 +--
drivers/net/hns3/hns3_flow.c | 6 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
drivers/net/hns3/hns3_rss.c | 100 +-
drivers/net/hns3/hns3_rss.h | 28 +-
drivers/net/hns3/hns3_rxtx.c | 30 +-
drivers/net/hns3/hns3_rxtx.h | 2 +-
drivers/net/hns3/hns3_rxtx_vec.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 270 +++---
drivers/net/i40e/i40e_ethdev.h | 24 +-
drivers/net/i40e/i40e_ethdev_vf.c | 110 +--
drivers/net/i40e/i40e_flow.c | 2 +-
drivers/net/i40e/i40e_hash.c | 156 +--
drivers/net/i40e/i40e_pf.c | 14 +-
drivers/net/i40e/i40e_rxtx.c | 10 +-
drivers/net/i40e/i40e_rxtx.h | 4 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +-
drivers/net/i40e/i40e_rxtx_vec_common.h | 8 +-
drivers/net/i40e/i40e_vf_representor.c | 48 +-
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 178 ++--
drivers/net/iavf/iavf_hash.c | 300 +++---
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx.h | 24 +-
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 6 +-
drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
drivers/net/ice/ice_dcf.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 90 +-
drivers/net/ice/ice_dcf_vf_representor.c | 58 +-
drivers/net/ice/ice_ethdev.c | 182 ++--
drivers/net/ice/ice_ethdev.h | 26 +-
drivers/net/ice/ice_hash.c | 268 +++---
drivers/net/ice/ice_rxtx.c | 8 +-
drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 4 +-
drivers/net/ice/ice_rxtx_vec_common.h | 26 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
drivers/net/igc/igc_ethdev.c | 134 +--
drivers/net/igc/igc_ethdev.h | 56 +-
drivers/net/igc/igc_txrx.c | 50 +-
drivers/net/ionic/ionic_ethdev.c | 128 +--
drivers/net/ionic/ionic_ethdev.h | 12 +-
drivers/net/ionic/ionic_lif.c | 36 +-
drivers/net/ionic/ionic_rxtx.c | 10 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 70 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 302 +++---
drivers/net/ixgbe/ixgbe_ethdev.h | 18 +-
drivers/net/ixgbe/ixgbe_fdir.c | 24 +-
drivers/net/ixgbe/ixgbe_flow.c | 2 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 12 +-
drivers/net/ixgbe/ixgbe_pf.c | 38 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 252 ++---
drivers/net/ixgbe/ixgbe_rxtx.h | 4 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 2 +-
drivers/net/ixgbe/ixgbe_tm.c | 16 +-
drivers/net/ixgbe/ixgbe_vf_representor.c | 16 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 14 +-
drivers/net/ixgbe/rte_pmd_ixgbe.h | 4 +-
drivers/net/kni/rte_eth_kni.c | 8 +-
drivers/net/liquidio/lio_ethdev.c | 102 +-
drivers/net/memif/memif_socket.c | 2 +-
drivers/net/memif/rte_eth_memif.c | 14 +-
drivers/net/mlx4/mlx4_ethdev.c | 32 +-
drivers/net/mlx4/mlx4_flow.c | 30 +-
drivers/net/mlx4/mlx4_intr.c | 8 +-
drivers/net/mlx4/mlx4_rxq.c | 20 +-
drivers/net/mlx4/mlx4_txq.c | 24 +-
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_os.c | 6 +-
drivers/net/mlx5/mlx5.c | 4 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_defs.h | 6 +-
drivers/net/mlx5/mlx5_ethdev.c | 6 +-
drivers/net/mlx5/mlx5_flow.c | 54 +-
drivers/net/mlx5/mlx5_flow.h | 12 +-
drivers/net/mlx5/mlx5_flow_dv.c | 44 +-
drivers/net/mlx5/mlx5_flow_verbs.c | 4 +-
drivers/net/mlx5/mlx5_rss.c | 2 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_rxtx_vec.h | 8 +-
drivers/net/mlx5/mlx5_tx.c | 30 +-
drivers/net/mlx5/mlx5_txq.c | 52 +-
drivers/net/mlx5/mlx5_vlan.c | 4 +-
drivers/net/mlx5/windows/mlx5_os.c | 4 +-
drivers/net/mvneta/mvneta_ethdev.c | 34 +-
drivers/net/mvneta/mvneta_ethdev.h | 12 +-
drivers/net/mvneta/mvneta_rxtx.c | 2 +-
drivers/net/mvpp2/mrvl_ethdev.c | 116 +--
drivers/net/netvsc/hn_ethdev.c | 62 +-
drivers/net/netvsc/hn_rndis.c | 50 +-
drivers/net/nfb/nfb_ethdev.c | 20 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfp/nfp_common.c | 122 +--
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfp_ethdev_vf.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 50 +-
drivers/net/null/rte_eth_null.c | 16 +-
drivers/net/octeontx/octeontx_ethdev.c | 78 +-
drivers/net/octeontx/octeontx_ethdev.h | 32 +-
drivers/net/octeontx/octeontx_ethdev_ops.c | 26 +-
drivers/net/octeontx2/otx2_ethdev.c | 96 +-
drivers/net/octeontx2/otx2_ethdev.h | 66 +-
drivers/net/octeontx2/otx2_ethdev_devargs.c | 12 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 18 +-
drivers/net/octeontx2/otx2_ethdev_sec.c | 8 +-
drivers/net/octeontx2/otx2_flow.c | 2 +-
drivers/net/octeontx2/otx2_flow_ctrl.c | 36 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_link.c | 40 +-
drivers/net/octeontx2/otx2_mcast.c | 2 +-
drivers/net/octeontx2/otx2_ptp.c | 4 +-
drivers/net/octeontx2/otx2_rss.c | 62 +-
drivers/net/octeontx2/otx2_rx.c | 4 +-
drivers/net/octeontx2/otx2_tx.c | 2 +-
drivers/net/octeontx2/otx2_vlan.c | 42 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 8 +-
drivers/net/pcap/pcap_ethdev.c | 12 +-
drivers/net/pfe/pfe_ethdev.c | 18 +-
drivers/net/qede/base/mcp_public.h | 4 +-
drivers/net/qede/qede_ethdev.c | 138 +--
drivers/net/qede/qede_filter.c | 10 +-
drivers/net/qede/qede_rxtx.c | 2 +-
drivers/net/qede/qede_rxtx.h | 16 +-
drivers/net/ring/rte_eth_ring.c | 20 +-
drivers/net/sfc/sfc.c | 30 +-
drivers/net/sfc/sfc_ef100_rx.c | 10 +-
drivers/net/sfc/sfc_ef100_tx.c | 20 +-
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 8 +-
drivers/net/sfc/sfc_ef10_tx.c | 32 +-
drivers/net/sfc/sfc_ethdev.c | 42 +-
drivers/net/sfc/sfc_flow.c | 2 +-
drivers/net/sfc/sfc_port.c | 54 +-
drivers/net/sfc/sfc_rx.c | 52 +-
drivers/net/sfc/sfc_tx.c | 50 +-
drivers/net/softnic/rte_eth_softnic.c | 12 +-
drivers/net/szedata2/rte_eth_szedata2.c | 14 +-
drivers/net/tap/rte_eth_tap.c | 104 +-
drivers/net/tap/tap_rss.h | 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 100 +-
drivers/net/thunderx/nicvf_ethdev.h | 42 +-
drivers/net/txgbe/txgbe_ethdev.c | 236 ++---
drivers/net/txgbe/txgbe_ethdev.h | 18 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 24 +-
drivers/net/txgbe/txgbe_fdir.c | 20 +-
drivers/net/txgbe/txgbe_flow.c | 2 +-
drivers/net/txgbe/txgbe_ipsec.c | 12 +-
drivers/net/txgbe/txgbe_pf.c | 34 +-
drivers/net/txgbe/txgbe_rxtx.c | 312 +++---
drivers/net/txgbe/txgbe_rxtx.h | 4 +-
drivers/net/txgbe/txgbe_tm.c | 16 +-
drivers/net/vhost/rte_eth_vhost.c | 16 +-
drivers/net/virtio/virtio_ethdev.c | 126 +--
drivers/net/vmxnet3/vmxnet3_ethdev.c | 74 +-
drivers/net/vmxnet3/vmxnet3_ethdev.h | 16 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 16 +-
examples/bbdev_app/main.c | 6 +-
examples/bond/main.c | 14 +-
examples/distributor/main.c | 12 +-
examples/ethtool/ethtool-app/main.c | 2 +-
examples/ethtool/lib/rte_ethtool.c | 18 +-
.../pipeline_worker_generic.c | 16 +-
.../eventdev_pipeline/pipeline_worker_tx.c | 12 +-
examples/flow_classify/flow_classify.c | 4 +-
examples/flow_filtering/main.c | 16 +-
examples/ioat/ioatfwd.c | 8 +-
examples/ip_fragmentation/main.c | 14 +-
examples/ip_pipeline/link.c | 14 +-
examples/ip_reassembly/main.c | 20 +-
examples/ipsec-secgw/ipsec-secgw.c | 34 +-
examples/ipsec-secgw/sa.c | 8 +-
examples/ipv4_multicast/main.c | 8 +-
examples/kni/main.c | 12 +-
examples/l2fwd-crypto/main.c | 10 +-
examples/l2fwd-event/l2fwd_common.c | 10 +-
examples/l2fwd-event/main.c | 2 +-
examples/l2fwd-jobstats/main.c | 8 +-
examples/l2fwd-keepalive/main.c | 8 +-
examples/l2fwd/main.c | 8 +-
examples/l3fwd-acl/main.c | 20 +-
examples/l3fwd-graph/main.c | 16 +-
examples/l3fwd-power/main.c | 18 +-
examples/l3fwd/l3fwd_event.c | 4 +-
examples/l3fwd/main.c | 20 +-
examples/link_status_interrupt/main.c | 10 +-
.../client_server_mp/mp_server/init.c | 4 +-
examples/multi_process/symmetric_mp/main.c | 14 +-
examples/ntb/ntb_fwd.c | 6 +-
examples/packet_ordering/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 18 +-
examples/pipeline/obj.c | 14 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 16 +-
examples/qos_sched/init.c | 6 +-
examples/rxtx_callbacks/main.c | 8 +-
examples/server_node_efd/server/init.c | 8 +-
examples/skeleton/basicfwd.c | 4 +-
examples/vhost/main.c | 28 +-
examples/vm_power_manager/main.c | 6 +-
examples/vmdq/main.c | 20 +-
examples/vmdq_dcb/main.c | 40 +-
lib/ethdev/rte_ethdev.c | 187 ++--
lib/ethdev/rte_ethdev.h | 907 +++++++++++-------
lib/ethdev/rte_ethdev_core.h | 2 +-
lib/ethdev/rte_flow.h | 2 +-
lib/gso/rte_gso.c | 20 +-
lib/gso/rte_gso.h | 4 +-
lib/mbuf/rte_mbuf_core.h | 8 +-
lib/mbuf/rte_mbuf_dyn.h | 2 +-
337 files changed, 6520 insertions(+), 6295 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
}
ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
- if (ret == 0 && fc_conf.mode != RTE_FC_NONE) {
+ if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE) {
printf("\t -- flow control mode %s%s high %u low %u pause %u%s%s\n",
- fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
- fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
- fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+ fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+ fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
fc_conf.autoneg ? " auto" : "",
fc_conf.high_water,
fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..41e92143121b 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,14 +668,14 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct test_perf *t = evt_test_priv(test);
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..96c8a5828364 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_rxconf rx_conf;
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
};
@@ -199,7 +199,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
@@ -224,7 +224,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
local_port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = rte_eth_dev_info_get(i, &dev_info);
if (ret != 0) {
@@ -234,9 +234,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
}
/* Enable mbuf fast free if PMD has the capability. */
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
#define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
#define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
/* Configuration */
#define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
static struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -178,7 +178,7 @@ app_ports_check_link(void)
RTE_LOG(INFO, USER1, "Port %u %s\n",
port,
link_status_text);
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
all_ports_up = 0;
}
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 82253bc75110..d6db93557b95 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1490,51 +1490,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
int duplex;
if (!strcmp(duplexstr, "half")) {
- duplex = ETH_LINK_HALF_DUPLEX;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
} else if (!strcmp(duplexstr, "full")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else if (!strcmp(duplexstr, "auto")) {
- duplex = ETH_LINK_FULL_DUPLEX;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
fprintf(stderr, "Unknown duplex parameter\n");
return -1;
}
if (!strcmp(speedstr, "10")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
} else if (!strcmp(speedstr, "100")) {
- *speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
- ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+ *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+ RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
} else {
- if (duplex != ETH_LINK_FULL_DUPLEX) {
+ if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
fprintf(stderr, "Invalid speed/duplex parameters\n");
return -1;
}
if (!strcmp(speedstr, "1000")) {
- *speed = ETH_LINK_SPEED_1G;
+ *speed = RTE_ETH_LINK_SPEED_1G;
} else if (!strcmp(speedstr, "10000")) {
- *speed = ETH_LINK_SPEED_10G;
+ *speed = RTE_ETH_LINK_SPEED_10G;
} else if (!strcmp(speedstr, "25000")) {
- *speed = ETH_LINK_SPEED_25G;
+ *speed = RTE_ETH_LINK_SPEED_25G;
} else if (!strcmp(speedstr, "40000")) {
- *speed = ETH_LINK_SPEED_40G;
+ *speed = RTE_ETH_LINK_SPEED_40G;
} else if (!strcmp(speedstr, "50000")) {
- *speed = ETH_LINK_SPEED_50G;
+ *speed = RTE_ETH_LINK_SPEED_50G;
} else if (!strcmp(speedstr, "100000")) {
- *speed = ETH_LINK_SPEED_100G;
+ *speed = RTE_ETH_LINK_SPEED_100G;
} else if (!strcmp(speedstr, "200000")) {
- *speed = ETH_LINK_SPEED_200G;
+ *speed = RTE_ETH_LINK_SPEED_200G;
} else if (!strcmp(speedstr, "auto")) {
- *speed = ETH_LINK_SPEED_AUTONEG;
+ *speed = RTE_ETH_LINK_SPEED_AUTONEG;
} else {
fprintf(stderr, "Unknown speed parameter\n");
return -1;
}
}
- if (*speed != ETH_LINK_SPEED_AUTONEG)
- *speed |= ETH_LINK_SPEED_FIXED;
+ if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+ *speed |= RTE_ETH_LINK_SPEED_FIXED;
return 0;
}
@@ -2185,33 +2185,33 @@ cmd_config_rss_parsed(void *parsed_result,
int ret;
if (!strcmp(res->value, "all"))
- rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
- ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
- ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
- ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
- ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+ RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+ RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+ RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "eth"))
- rss_conf.rss_hf = ETH_RSS_ETH;
+ rss_conf.rss_hf = RTE_ETH_RSS_ETH;
else if (!strcmp(res->value, "vlan"))
- rss_conf.rss_hf = ETH_RSS_VLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
else if (!strcmp(res->value, "ip"))
- rss_conf.rss_hf = ETH_RSS_IP;
+ rss_conf.rss_hf = RTE_ETH_RSS_IP;
else if (!strcmp(res->value, "udp"))
- rss_conf.rss_hf = ETH_RSS_UDP;
+ rss_conf.rss_hf = RTE_ETH_RSS_UDP;
else if (!strcmp(res->value, "tcp"))
- rss_conf.rss_hf = ETH_RSS_TCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_TCP;
else if (!strcmp(res->value, "sctp"))
- rss_conf.rss_hf = ETH_RSS_SCTP;
+ rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
else if (!strcmp(res->value, "ether"))
- rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
else if (!strcmp(res->value, "port"))
- rss_conf.rss_hf = ETH_RSS_PORT;
+ rss_conf.rss_hf = RTE_ETH_RSS_PORT;
else if (!strcmp(res->value, "vxlan"))
- rss_conf.rss_hf = ETH_RSS_VXLAN;
+ rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
else if (!strcmp(res->value, "geneve"))
- rss_conf.rss_hf = ETH_RSS_GENEVE;
+ rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
else if (!strcmp(res->value, "nvgre"))
- rss_conf.rss_hf = ETH_RSS_NVGRE;
+ rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
else if (!strcmp(res->value, "l3-pre32"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
else if (!strcmp(res->value, "l3-pre40"))
@@ -2225,44 +2225,44 @@ cmd_config_rss_parsed(void *parsed_result,
else if (!strcmp(res->value, "l3-pre96"))
rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
else if (!strcmp(res->value, "l3-src-only"))
- rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
else if (!strcmp(res->value, "l3-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
else if (!strcmp(res->value, "l4-src-only"))
- rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
else if (!strcmp(res->value, "l4-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
else if (!strcmp(res->value, "l2-src-only"))
- rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
else if (!strcmp(res->value, "l2-dst-only"))
- rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
else if (!strcmp(res->value, "l2tpv3"))
- rss_conf.rss_hf = ETH_RSS_L2TPV3;
+ rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
else if (!strcmp(res->value, "esp"))
- rss_conf.rss_hf = ETH_RSS_ESP;
+ rss_conf.rss_hf = RTE_ETH_RSS_ESP;
else if (!strcmp(res->value, "ah"))
- rss_conf.rss_hf = ETH_RSS_AH;
+ rss_conf.rss_hf = RTE_ETH_RSS_AH;
else if (!strcmp(res->value, "pfcp"))
- rss_conf.rss_hf = ETH_RSS_PFCP;
+ rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
else if (!strcmp(res->value, "pppoe"))
- rss_conf.rss_hf = ETH_RSS_PPPOE;
+ rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
else if (!strcmp(res->value, "gtpu"))
- rss_conf.rss_hf = ETH_RSS_GTPU;
+ rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
else if (!strcmp(res->value, "ecpri"))
- rss_conf.rss_hf = ETH_RSS_ECPRI;
+ rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
else if (!strcmp(res->value, "mpls"))
- rss_conf.rss_hf = ETH_RSS_MPLS;
+ rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
else if (!strcmp(res->value, "none"))
rss_conf.rss_hf = 0;
else if (!strcmp(res->value, "level-default")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
} else if (!strcmp(res->value, "level-outer")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
} else if (!strcmp(res->value, "level-inner")) {
- rss_hf &= (~ETH_RSS_LEVEL_MASK);
- rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+ rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+ rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
} else if (!strcmp(res->value, "default"))
use_default = 1;
else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -3029,10 +3029,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
} else
printf("The reta size of port %d is %u\n",
res->port_id, dev_info.reta_size);
- if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+ if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
fprintf(stderr,
"Currently do not support more than %u entries of redirection table\n",
- ETH_RSS_RETA_SIZE_512);
+ RTE_ETH_RSS_RETA_SIZE_512);
return;
}
@@ -3149,7 +3149,7 @@ cmd_showport_reta_parsed(void *parsed_result,
if (ret != 0)
return;
- max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+ max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
if (res->size == 0 || res->size > max_reta_size) {
fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
res->size, max_reta_size);
@@ -3289,7 +3289,7 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
- if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+ if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
fprintf(stderr,
"The invalid number of traffic class, only 4 or 8 allowed.\n");
return;
@@ -4293,9 +4293,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
enum rte_vlan_type vlan_type;
if (!strcmp(res->vlan_type, "inner"))
- vlan_type = ETH_VLAN_TYPE_INNER;
+ vlan_type = RTE_ETH_VLAN_TYPE_INNER;
else if (!strcmp(res->vlan_type, "outer"))
- vlan_type = ETH_VLAN_TYPE_OUTER;
+ vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
else {
fprintf(stderr, "Unknown vlan type\n");
return;
@@ -4632,55 +4632,55 @@ csum_show(int port_id)
printf("Parse tunnel is %s\n",
(ports[port_id].parse_tunnel) ? "on" : "off");
printf("IP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
printf("UDP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
printf("TCP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
printf("SCTP checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
printf("Outer-Ip checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
printf("Outer-Udp checksum offload is %s\n",
- (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
/* display warnings if configuration is not supported by the NIC */
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
- if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware UDP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware TCP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
fprintf(stderr,
"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
port_id);
}
- if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
== 0) {
fprintf(stderr,
"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4730,8 +4730,8 @@ cmd_csum_parsed(void *parsed_result,
if (!strcmp(res->proto, "ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_IPV4_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
} else {
fprintf(stderr,
"IP checksum offload is not supported by port %u\n",
@@ -4739,8 +4739,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_UDP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
} else {
fprintf(stderr,
"UDP checksum offload is not supported by port %u\n",
@@ -4748,8 +4748,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "tcp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_TCP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
} else {
fprintf(stderr,
"TCP checksum offload is not supported by port %u\n",
@@ -4757,8 +4757,8 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "sctp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SCTP_CKSUM)) {
- csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+ csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
} else {
fprintf(stderr,
"SCTP checksum offload is not supported by port %u\n",
@@ -4766,9 +4766,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-ip")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
} else {
fprintf(stderr,
"Outer IP checksum offload is not supported by port %u\n",
@@ -4776,9 +4776,9 @@ cmd_csum_parsed(void *parsed_result,
}
} else if (!strcmp(res->proto, "outer-udp")) {
if (hw == 0 || (dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
csum_offloads |=
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
} else {
fprintf(stderr,
"Outer UDP checksum offload is not supported by port %u\n",
@@ -4933,7 +4933,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr, "Error: TSO is not supported by port %d\n",
res->port_id);
return;
@@ -4941,11 +4941,11 @@ cmd_tso_set_parsed(void *parsed_result,
if (ports[res->port_id].tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_TCP_TSO;
+ ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO for non-tunneled packets is disabled\n");
} else {
ports[res->port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
printf("TSO segment size for non-tunneled packets is %d\n",
ports[res->port_id].tso_segsz);
}
@@ -4957,7 +4957,7 @@ cmd_tso_set_parsed(void *parsed_result,
return;
if ((ports[res->port_id].tso_segsz != 0) &&
- (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+ (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
fprintf(stderr,
"Warning: TSO enabled but not supported by port %d\n",
res->port_id);
@@ -5028,27 +5028,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
return dev_info;
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
fprintf(stderr,
"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
fprintf(stderr,
"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
fprintf(stderr,
"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
fprintf(stderr,
"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
fprintf(stderr,
"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
- if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+ if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
fprintf(stderr,
"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
port_id);
@@ -5076,20 +5076,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
dev_info = check_tunnel_tso_nic_support(res->port_id);
if (ports[res->port_id].tunnel_tso_segsz == 0) {
ports[res->port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ ~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
printf("TSO for tunneled packets is disabled\n");
} else {
- uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
ports[res->port_id].dev_conf.txmode.offloads |=
(tso_offloads & dev_info.tx_offload_capa);
@@ -5112,7 +5112,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
fprintf(stderr,
"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
if (!(ports[res->port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
fprintf(stderr,
"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
}
@@ -7058,9 +7058,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
return;
}
- if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
rx_fc_en = true;
- if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+ if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
tx_fc_en = true;
printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7338,12 +7338,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
/* Partial command line, retrieve current configuration */
@@ -7356,11 +7356,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
return;
}
- if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
rx_fc_en = 1;
- if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
- (fc_conf.mode == RTE_FC_FULL))
+ if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+ (fc_conf.mode == RTE_ETH_FC_FULL))
tx_fc_en = 1;
}
@@ -7428,12 +7428,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
/*
* Rx on/off, flow control is enabled/disabled on RX side. This can indicate
- * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+ * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
* Tx on/off, flow control is enabled/disabled on TX side. This can indicate
- * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+ * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
*/
static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
- {RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+ {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
};
memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -8950,13 +8950,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
if (!strcmp(res->what,"rxmode")) {
if (!strcmp(res->mode, "AUPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
else if (!strcmp(res->mode, "ROPE"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
else if (!strcmp(res->mode, "BAM"))
- vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
else if (!strncmp(res->mode, "MPE",3))
- vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+ vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
}
RTE_SET_USED(is_on);
@@ -9356,7 +9356,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
int ret;
tunnel_udp.udp_port = res->udp_port;
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
if (!strcmp(res->what, "add"))
ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9422,13 +9422,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
tunnel_udp.udp_port = res->udp_port;
if (!strcmp(res->tunnel_type, "vxlan")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
} else if (!strcmp(res->tunnel_type, "geneve")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
} else if (!strcmp(res->tunnel_type, "ecpri")) {
- tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+ tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
} else {
fprintf(stderr, "Invalid tunnel type\n");
return;
@@ -9543,20 +9543,20 @@ cmd_set_mirror_mask_parsed(void *parsed_result,
memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
- unsigned int vlan_list[ETH_MIRROR_MAX_VLANS];
+ unsigned int vlan_list[RTE_ETH_MIRROR_MAX_VLANS];
mr_conf.dst_pool = res->dstpool_id;
if (!strcmp(res->what, "pool-mirror-up")) {
mr_conf.pool_mask = strtoull(res->value, NULL, 16);
- mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_UP;
+ mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_UP;
} else if (!strcmp(res->what, "pool-mirror-down")) {
mr_conf.pool_mask = strtoull(res->value, NULL, 16);
- mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_DOWN;
+ mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN;
} else if (!strcmp(res->what, "vlan-mirror")) {
- mr_conf.rule_type = ETH_MIRROR_VLAN;
+ mr_conf.rule_type = RTE_ETH_MIRROR_VLAN;
nb_item = parse_item_list(res->value, "vlan",
- ETH_MIRROR_MAX_VLANS, vlan_list, 1);
+ RTE_ETH_MIRROR_MAX_VLANS, vlan_list, 1);
if (nb_item <= 0)
return;
@@ -9656,9 +9656,9 @@ cmd_set_mirror_link_parsed(void *parsed_result,
memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
if (!strcmp(res->what, "uplink-mirror"))
- mr_conf.rule_type = ETH_MIRROR_UPLINK_PORT;
+ mr_conf.rule_type = RTE_ETH_MIRROR_UPLINK_PORT;
else
- mr_conf.rule_type = ETH_MIRROR_DOWNLINK_PORT;
+ mr_conf.rule_type = RTE_ETH_MIRROR_DOWNLINK_PORT;
mr_conf.dst_pool = res->dstpool_id;
@@ -11823,7 +11823,7 @@ cmd_set_macsec_offload_on_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
#endif
@@ -11834,7 +11834,7 @@ cmd_set_macsec_offload_on_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MACSEC_INSERT;
+ RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
@@ -11920,7 +11920,7 @@ cmd_set_macsec_offload_off_parsed(
if (ret != 0)
return;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
#ifdef RTE_NET_IXGBE
ret = rte_pmd_ixgbe_macsec_disable(port_id);
#endif
@@ -11928,7 +11928,7 @@ cmd_set_macsec_offload_off_parsed(
switch (ret) {
case 0:
ports[port_id].dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MACSEC_INSERT;
+ ~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
cmd_reconfig_device_queue(port_id, 1, 1);
break;
case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31d8ba1b913c..1b0b9ab6d445 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,60 +86,60 @@ static const struct {
};
const struct rss_type_info rss_type_table[] = {
- { "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
- ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
- ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
- ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+ { "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+ RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+ RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
{ "none", 0 },
- { "eth", ETH_RSS_ETH },
- { "l2-src-only", ETH_RSS_L2_SRC_ONLY },
- { "l2-dst-only", ETH_RSS_L2_DST_ONLY },
- { "vlan", ETH_RSS_VLAN },
- { "s-vlan", ETH_RSS_S_VLAN },
- { "c-vlan", ETH_RSS_C_VLAN },
- { "ipv4", ETH_RSS_IPV4 },
- { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
- { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
- { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
- { "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
- { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
- { "ipv6", ETH_RSS_IPV6 },
- { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
- { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
- { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
- { "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
- { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
- { "l2-payload", ETH_RSS_L2_PAYLOAD },
- { "ipv6-ex", ETH_RSS_IPV6_EX },
- { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
- { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
- { "port", ETH_RSS_PORT },
- { "vxlan", ETH_RSS_VXLAN },
- { "geneve", ETH_RSS_GENEVE },
- { "nvgre", ETH_RSS_NVGRE },
- { "ip", ETH_RSS_IP },
- { "udp", ETH_RSS_UDP },
- { "tcp", ETH_RSS_TCP },
- { "sctp", ETH_RSS_SCTP },
- { "tunnel", ETH_RSS_TUNNEL },
+ { "eth", RTE_ETH_RSS_ETH },
+ { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+ { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+ { "vlan", RTE_ETH_RSS_VLAN },
+ { "s-vlan", RTE_ETH_RSS_S_VLAN },
+ { "c-vlan", RTE_ETH_RSS_C_VLAN },
+ { "ipv4", RTE_ETH_RSS_IPV4 },
+ { "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+ { "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+ { "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+ { "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+ { "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+ { "ipv6", RTE_ETH_RSS_IPV6 },
+ { "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+ { "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+ { "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+ { "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+ { "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+ { "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+ { "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+ { "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+ { "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+ { "port", RTE_ETH_RSS_PORT },
+ { "vxlan", RTE_ETH_RSS_VXLAN },
+ { "geneve", RTE_ETH_RSS_GENEVE },
+ { "nvgre", RTE_ETH_RSS_NVGRE },
+ { "ip", RTE_ETH_RSS_IP },
+ { "udp", RTE_ETH_RSS_UDP },
+ { "tcp", RTE_ETH_RSS_TCP },
+ { "sctp", RTE_ETH_RSS_SCTP },
+ { "tunnel", RTE_ETH_RSS_TUNNEL },
{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
- { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
- { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
- { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
- { "l4-dst-only", ETH_RSS_L4_DST_ONLY },
- { "esp", ETH_RSS_ESP },
- { "ah", ETH_RSS_AH },
- { "l2tpv3", ETH_RSS_L2TPV3 },
- { "pfcp", ETH_RSS_PFCP },
- { "pppoe", ETH_RSS_PPPOE },
- { "gtpu", ETH_RSS_GTPU },
- { "ecpri", ETH_RSS_ECPRI },
- { "mpls", ETH_RSS_MPLS },
+ { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+ { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+ { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+ { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+ { "esp", RTE_ETH_RSS_ESP },
+ { "ah", RTE_ETH_RSS_AH },
+ { "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+ { "pfcp", RTE_ETH_RSS_PFCP },
+ { "pppoe", RTE_ETH_RSS_PPPOE },
+ { "gtpu", RTE_ETH_RSS_GTPU },
+ { "ecpri", RTE_ETH_RSS_ECPRI },
+ { "mpls", RTE_ETH_RSS_MPLS },
{ NULL, 0 },
};
@@ -474,39 +474,39 @@ static void
device_infos_display_speeds(uint32_t speed_capa)
{
printf("\n\tDevice speed capability:");
- if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+ if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
printf(" Autonegotiate (all speeds)");
- if (speed_capa & ETH_LINK_SPEED_FIXED)
+ if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
printf(" Disable autonegotiate (fixed speed) ");
- if (speed_capa & ETH_LINK_SPEED_10M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
printf(" 10 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_10M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10M)
printf(" 10 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M_HD)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
printf(" 100 Mbps half-duplex ");
- if (speed_capa & ETH_LINK_SPEED_100M)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100M)
printf(" 100 Mbps full-duplex ");
- if (speed_capa & ETH_LINK_SPEED_1G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_1G)
printf(" 1 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_2_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
printf(" 2.5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_5G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_5G)
printf(" 5 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_10G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_10G)
printf(" 10 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_20G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_20G)
printf(" 20 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_25G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_25G)
printf(" 25 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_40G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_40G)
printf(" 40 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_50G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_50G)
printf(" 50 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_56G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_56G)
printf(" 56 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_100G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_100G)
printf(" 100 Gbps ");
- if (speed_capa & ETH_LINK_SPEED_200G)
+ if (speed_capa & RTE_ETH_LINK_SPEED_200G)
printf(" 200 Gbps ");
}
@@ -636,9 +636,9 @@ port_infos_display(portid_t port_id)
printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
- printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex"));
- printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+ printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
("On") : ("Off"));
if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -656,22 +656,22 @@ port_infos_display(portid_t port_id)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (vlan_offload >= 0){
printf("VLAN offload: \n");
- if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
printf(" strip on, ");
else
printf(" strip off, ");
- if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
printf("filter on, ");
else
printf("filter off, ");
- if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+ if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
printf("extend on, ");
else
printf("extend off, ");
- if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+ if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
printf("qinq strip on\n");
else
printf("qinq strip off\n");
@@ -1166,7 +1166,7 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag)
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ else if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
/*
* Ether overhead in driver is equal to the difference of
* max_rx_pktlen and max_mtu in rte_eth_dev_info when the
@@ -1175,12 +1175,12 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
if (mtu > RTE_ETHER_MTU) {
rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rte_port->dev_conf.rxmode.max_rx_pkt_len =
mtu + eth_overhead;
} else
rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
}
}
@@ -3118,7 +3118,7 @@ dcb_fwd_config_setup(void)
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
fwd_lcores[lc_id]->stream_idx = sm_id;
- for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+ for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
@@ -4181,11 +4181,11 @@ vlan_extend_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
} else {
- vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+ vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4211,11 +4211,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
- vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4256,11 +4256,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
- vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4286,11 +4286,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
if (on) {
- vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
} else {
- vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
- port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+ port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
}
diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4360,7 +4360,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (ports[port_id].dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_QINQ_INSERT) {
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
fprintf(stderr, "Error, as QinQ has been enabled.\n");
return;
}
@@ -4369,7 +4369,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
fprintf(stderr,
"Error: vlan insert is not supported by port %d\n",
port_id);
@@ -4377,7 +4377,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
ports[port_id].tx_vlan_id = vlan_id;
}
@@ -4396,7 +4396,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
if (ret != 0)
return;
- if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+ if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
fprintf(stderr,
"Error: qinq insert not supported by port %d\n",
port_id);
@@ -4404,8 +4404,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
}
tx_vlan_reset(port_id);
- ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = vlan_id;
ports[port_id].tx_vlan_id_outer = vlan_id_outer;
}
@@ -4414,8 +4414,8 @@ void
tx_vlan_reset(portid_t port_id)
{
ports[port_id].dev_conf.txmode.offloads &=
- ~(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT);
+ ~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
ports[port_id].tx_vlan_id = 0;
ports[port_id].tx_vlan_id_outer = 0;
}
@@ -4821,7 +4821,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
ret = eth_link_get_nowait_print_err(port_id, &link);
if (ret < 0)
return 1;
- if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+ if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
rate > link.link_speed) {
fprintf(stderr,
"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533b6..454a2d41c366 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
ol_flags |= PKT_TX_IP_CKSUM;
} else {
ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
/* do not recalculate udp cksum if it was 0 */
if (udp_hdr->dgram_cksum != 0) {
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
ol_flags |= PKT_TX_UDP_CKSUM;
} else {
udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
if (tso_segsz)
ol_flags |= PKT_TX_TCP_SEG;
- else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
} else {
tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
((char *)l3_hdr + info->l3_len);
/* sctp payload must be a multiple of 4 to be
* offloaded */
- if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
((ipv4_hdr->total_length & 0x3) == 0)) {
ol_flags |= PKT_TX_SCTP_CKSUM;
} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ipv4_hdr->hdr_checksum = 0;
ol_flags |= PKT_TX_OUTER_IPV4;
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
ol_flags |= PKT_TX_OUTER_IP_CKSUM;
else
ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
ol_flags |= PKT_TX_TCP_SEG;
/* Skip SW outer UDP checksum generation if HW supports it */
- if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
udp_hdr->dgram_cksum
= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (info.is_tunnel == 1) {
if (info.tunnel_tso_segsz ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
m->outer_l2_len = info.outer_l2_len;
m->outer_l3_len = info.outer_l3_len;
m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
rte_be_to_cpu_16(info.outer_ethertype),
info.outer_l3_len);
/* dump tx packet info */
- if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+ if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
info.tso_segsz != 0)
printf("tx: m->l2_len=%d m->l3_len=%d "
"m->l4_len=%d\n",
m->l2_len, m->l3_len, m->l4_len);
if (info.is_tunnel == 1) {
if ((tx_offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
(tx_offloads &
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
(tx_ol_flags & PKT_TX_OUTER_IPV6))
printf("tx: m->outer_l2_len=%d "
"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 9348618d0f8d..7d658d002cb6 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags |= PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d48..1d878ba0a694 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
txp = &ports[fs->tx_port];
tx_offloads = txp->dev_conf.txmode.offloads;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
for (i = 0; i < nb_rx; i++) {
if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
{
uint64_t ol_flags = 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
PKT_TX_VLAN : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
PKT_TX_QINQ : 0;
- ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+ ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
PKT_TX_MACSEC : 0;
return ol_flags;
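A minimal sketch of a caller, using testpmd-style names (ports, fs, pkts_burst and nb_rx are assumptions borrowed from the forwarding engines above, not part of this patch):

    uint64_t tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
    uint64_t ol_flags = ol_flags_init(tx_offloads);
    uint16_t i;

    /* the same per-port flags apply to every mbuf in the burst */
    for (i = 0; i < nb_rx; i++)
        pkts_burst[i]->ol_flags |= ol_flags;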
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 7c13210f04aa..1d0187723532 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -475,29 +475,29 @@ parse_event_printing_config(const char *optarg, int enable)
static int
parse_link_speed(int n)
{
- uint32_t speed = ETH_LINK_SPEED_FIXED;
+ uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
switch (n) {
case 1000:
- speed |= ETH_LINK_SPEED_1G;
+ speed |= RTE_ETH_LINK_SPEED_1G;
break;
case 10000:
- speed |= ETH_LINK_SPEED_10G;
+ speed |= RTE_ETH_LINK_SPEED_10G;
break;
case 25000:
- speed |= ETH_LINK_SPEED_25G;
+ speed |= RTE_ETH_LINK_SPEED_25G;
break;
case 40000:
- speed |= ETH_LINK_SPEED_40G;
+ speed |= RTE_ETH_LINK_SPEED_40G;
break;
case 50000:
- speed |= ETH_LINK_SPEED_50G;
+ speed |= RTE_ETH_LINK_SPEED_50G;
break;
case 100000:
- speed |= ETH_LINK_SPEED_100G;
+ speed |= RTE_ETH_LINK_SPEED_100G;
break;
case 200000:
- speed |= ETH_LINK_SPEED_200G;
+ speed |= RTE_ETH_LINK_SPEED_200G;
break;
case 100:
case 10:
@@ -912,13 +912,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
if (!strcmp(optarg, "64K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_64K;
+ RTE_ETH_FDIR_PBALLOC_64K;
else if (!strcmp(optarg, "128K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_128K;
+ RTE_ETH_FDIR_PBALLOC_128K;
else if (!strcmp(optarg, "256K"))
fdir_conf.pballoc =
- RTE_FDIR_PBALLOC_256K;
+ RTE_ETH_FDIR_PBALLOC_256K;
else
rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
" must be: 64K or 128K or 256K\n",
@@ -960,34 +960,34 @@ launch_args_parse(int argc, char** argv)
}
#endif
if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
- rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
- rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
- rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
if (!strcmp(lgopts[opt_idx].name,
"enable-rx-timestamp"))
- rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-filter"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-vlan-extend"))
- rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
if (!strcmp(lgopts[opt_idx].name,
"enable-hw-qinq-strip"))
- rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
rx_drop_en = 1;
@@ -1009,13 +1009,13 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
set_pkt_forwarding_mode(optarg);
if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
- rss_hf = ETH_RSS_IP;
+ rss_hf = RTE_ETH_RSS_IP;
if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
- rss_hf = ETH_RSS_UDP;
+ rss_hf = RTE_ETH_RSS_UDP;
if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
- rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
- rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
if (!strcmp(lgopts[opt_idx].name, "rxq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1386,12 +1386,12 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
char *end = NULL;
n = strtoul(optarg, &end, 16);
- if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+ if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
else
rte_exit(EXIT_FAILURE,
"rx-mq-mode must be >= 0 and <= %d\n",
- ETH_MQ_RX_VMDQ_DCB_RSS);
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
}
if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6cbe9ba3c893..30bf897d6da8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -337,7 +337,7 @@ uint64_t noisy_lkup_num_reads_writes;
/*
* Receive Side Scaling (RSS) configuration.
*/
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
/*
* Port topology configuration
@@ -454,12 +454,12 @@ struct rte_eth_rxmode rx_mode = {
};
struct rte_eth_txmode tx_mode = {
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
};
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
.mode = RTE_FDIR_MODE_NONE,
- .pballoc = RTE_FDIR_PBALLOC_64K,
+ .pballoc = RTE_ETH_FDIR_PBALLOC_64K,
.status = RTE_FDIR_REPORT_STATUS,
.mask = {
.vlan_tci_mask = 0xFFEF,
@@ -513,7 +513,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
/*
* hexadecimal bitmask of RX mq mode can be enabled.
*/
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
/*
* Used to set forced link speed
@@ -1437,9 +1437,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
"Updating jumbo frame offload failed for port %u\n",
pid);
- if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
- ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Apply Rx offloads configuration */
for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1566,8 +1566,8 @@ init_config(void)
init_port_config();
- gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
/*
* Records which Mbuf pool to use by each logical core, if needed.
*/
@@ -3154,7 +3154,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3414,17 +3414,17 @@ update_jumbo_frame_offload(portid_t portid)
port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if ((port->dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
port->dev_conf.rxmode.max_rx_pkt_len,
portid);
return -1;
}
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
on = true;
}
@@ -3436,16 +3436,16 @@ update_jumbo_frame_offload(portid_t portid)
/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port->rx_conf[qid].offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port->rx_conf[qid].offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
}
}
/* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
* if unset do it here
*/
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
ret = rte_eth_dev_set_mtu(portid,
port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
if (ret)
@@ -3486,9 +3486,9 @@ init_port_config(void)
if( port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0)
port->dev_conf.rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_RSS);
else
- port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
}
rxtx_port_config(port);
@@ -3575,9 +3575,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->enable_default_pool = 0;
vmdq_rx_conf->default_pool = 0;
vmdq_rx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_tx_conf->nb_queue_pools =
- (num_tcs == ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+ (num_tcs == RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3585,7 +3585,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
vmdq_rx_conf->pool_map[i].pools =
1 << (i % vmdq_rx_conf->nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
}
@@ -3593,8 +3593,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
/* set DCB mode of RX and TX of multiple queues */
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
- eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ (rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
} else {
struct rte_eth_dcb_rx_conf *rx_conf =
&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3610,23 +3610,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
rx_conf->nb_tcs = num_tcs;
tx_conf->nb_tcs = num_tcs;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
rx_conf->dcb_tc[i] = i % num_tcs;
tx_conf->dcb_tc[i] = i % num_tcs;
}
eth_conf->rxmode.mq_mode =
(enum rte_eth_rx_mq_mode)
- (rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+ (rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
eth_conf->rx_adv_conf.rss_conf = rss_conf;
- eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+ eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
}
if (pfc_en)
eth_conf->dcb_capability_en =
- ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+ RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
else
- eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+ eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
return 0;
}
@@ -3653,7 +3653,7 @@ init_port_dcb_config(portid_t pid,
retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
if (retval < 0)
return retval;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
/* re-configure the device . */
retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3703,7 +3703,7 @@ init_port_dcb_config(portid_t pid,
rxtx_port_config(rte_port);
/* VLAN filter */
- rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
for (i = 0; i < RTE_DIM(vlan_tags); i++)
rx_vft_set(pid, vlan_tags[i], 1);
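For reference, the VLAN filter setup above boils down to pairing the offload flag with per-VID filter entries; a standalone sketch (port_id and vlan_id are illustrative, and the port is assumed configured and started):

    port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
    /* ... rte_eth_dev_configure() / rte_eth_dev_start() ... */
    ret = rte_eth_dev_vlan_filter(port_id, vlan_id, 1 /* on */);
    if (ret != 0)
        fprintf(stderr, "VLAN filter set failed: %d\n", ret);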
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 16a3598e48c5..e4ad8a6a7cff 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -446,7 +446,7 @@ extern lcoreid_t bitrate_lcore_id;
extern uint8_t bitrate_enabled;
#endif
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
/*
* Configuration of packet segments used to scatter received packets
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d340..5409d7a0deb0 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -352,11 +352,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
tx_offloads = txp->dev_conf.txmode.offloads;
vlan_tci = txp->tx_vlan_id;
vlan_tci_outer = txp->tx_vlan_id_outer;
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ol_flags = PKT_TX_VLAN_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
ol_flags |= PKT_TX_QINQ_PKT;
- if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
ol_flags |= PKT_TX_MACSEC;
/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..7c0ebec5bd4b 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
text, strlen(text), "Invalid default link status string");
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_FIXED;
- link_status.link_speed = ETH_SPEED_NUM_10M,
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_10M,
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #2: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
text, strlen(text), "Invalid default link status "
"string with HDX");
- link_status.link_speed = ETH_SPEED_NUM_NONE;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #3: %s\n", text);
RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
"string with HDX");
/* test max str len */
- link_status.link_speed = ETH_SPEED_NUM_200G;
- link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
- link_status.link_autoneg = ETH_LINK_AUTONEG;
+ link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+ link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
printf("Default link up #4:len = %d, %s\n", ret, text);
RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
{
int ret = 0;
struct rte_eth_link link_status = {
- .link_speed = ETH_SPEED_NUM_2_5G,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_speed = RTE_ETH_SPEED_NUM_2_5G,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
int ret = 0;
struct rte_eth_link link_status = {
.link_speed = 55555,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
- .link_duplex = ETH_LINK_FULL_DUPLEX
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX
};
char text[RTE_ETH_LINK_MAX_STR_LEN];
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
const char *value;
uint32_t link_speed;
} speed_str_map[] = {
- { "None", ETH_SPEED_NUM_NONE },
- { "10 Mbps", ETH_SPEED_NUM_10M },
- { "100 Mbps", ETH_SPEED_NUM_100M },
- { "1 Gbps", ETH_SPEED_NUM_1G },
- { "2.5 Gbps", ETH_SPEED_NUM_2_5G },
- { "5 Gbps", ETH_SPEED_NUM_5G },
- { "10 Gbps", ETH_SPEED_NUM_10G },
- { "20 Gbps", ETH_SPEED_NUM_20G },
- { "25 Gbps", ETH_SPEED_NUM_25G },
- { "40 Gbps", ETH_SPEED_NUM_40G },
- { "50 Gbps", ETH_SPEED_NUM_50G },
- { "56 Gbps", ETH_SPEED_NUM_56G },
- { "100 Gbps", ETH_SPEED_NUM_100G },
- { "200 Gbps", ETH_SPEED_NUM_200G },
- { "Unknown", ETH_SPEED_NUM_UNKNOWN },
+ { "None", RTE_ETH_SPEED_NUM_NONE },
+ { "10 Mbps", RTE_ETH_SPEED_NUM_10M },
+ { "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+ { "1 Gbps", RTE_ETH_SPEED_NUM_1G },
+ { "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+ { "5 Gbps", RTE_ETH_SPEED_NUM_5G },
+ { "10 Gbps", RTE_ETH_SPEED_NUM_10G },
+ { "20 Gbps", RTE_ETH_SPEED_NUM_20G },
+ { "25 Gbps", RTE_ETH_SPEED_NUM_25G },
+ { "40 Gbps", RTE_ETH_SPEED_NUM_40G },
+ { "50 Gbps", RTE_ETH_SPEED_NUM_50G },
+ { "56 Gbps", RTE_ETH_SPEED_NUM_56G },
+ { "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+ { "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+ { "Unknown", RTE_ETH_SPEED_NUM_UNKNOWN },
{ "Invalid", 50505 }
};
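The table above mirrors what ``rte_eth_link_speed_to_str()`` returns; a quick sketch of exercising it with one renamed constant (assuming the usual test harness):

    const char *str = rte_eth_link_speed_to_str(RTE_ETH_SPEED_NUM_25G);

    /* per the map above, str should be "25 Gbps" */
    RTE_TEST_ASSERT(strcmp(str, "25 Gbps") == 0,
                    "unexpected speed string: %s", str);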
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index 9198767b4194..bb7917010d62 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -106,7 +106,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
.intr_conf = {
.rxq = 1,
@@ -121,7 +121,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
{
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
},
};
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
static const struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..23c024aa1b0c 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,12 +134,12 @@ static uint16_t vlan_id = 0x100;
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.split_hdr_size = 0,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..1556f14d6921 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,12 +107,12 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..93caaf986c2f 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -80,29 +80,29 @@ static struct link_bonding_rssconf_unittest_params test_params = {
*/
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IPV6,
+ .rss_hf = RTE_ETH_RSS_IPV6,
},
},
.lpbk_mode = 0,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..da7b7ad1f7cc 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,12 +62,12 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 1, /* enable loopback */
};
@@ -156,7 +156,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -823,7 +823,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
/* bulk alloc rx, full-featured tx */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "hybrid")) {
/* bulk alloc rx, vector tx
@@ -832,13 +832,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
*/
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
} else if (!strcmp(mode, "full")) {
/* full feature rx,tx pair */
tx_conf.tx_rs_thresh = 32;
tx_conf.tx_free_thresh = 32;
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed95..6eecfa385537 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
void *pkt = NULL;
struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
rte_pktmbuf_free(pkt);
@@ -178,7 +178,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
int wait_to_complete __rte_unused)
{
if (!bonded_eth_dev->data->dev_started)
- bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -574,9 +574,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
eth_dev->data->nb_rx_queues = (uint16_t)1;
eth_dev->data->nb_tx_queues = (uint16_t)1;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
- eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue configuration.
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue config.
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..13f30e39363e 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,7 +71,7 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify whether the port Speed and Duplex match the desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
+ * Check ``RTE_ETH_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
* Check promiscuous mode if the drops do not occur for a unique MAC address,
using ``rte_eth_promiscuous_get``.
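A minimal sketch of those two checks (port_id and the output format are illustrative):

    struct rte_eth_link link;
    struct rte_eth_dev_info dev_info;

    if (rte_eth_link_get(port_id, &link) == 0 &&
        link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN)
        printf("speed %u Mbps, %s duplex\n", link.link_speed,
               link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
               "full" : "half");

    if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
        (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME))
        printf("JUMBO_FRAME offload is available\n");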
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..77827e750195 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,22 +877,22 @@ processing. This improved performance is derived from a number of optimizations:
* TX: only the following reduced set of transmit offloads is supported in
vector mode::
- DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* RX: only the following reduced set of receive offloads is supported in
vector mode (note that jumbo MTU is allowed only when the MTU setting
- does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
- DEV_RX_OFFLOAD_VLAN_STRIP
- DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
- DEV_RX_OFFLOAD_IPV4_CKSUM
- DEV_RX_OFFLOAD_UDP_CKSUM
- DEV_RX_OFFLOAD_TCP_CKSUM
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
- DEV_RX_OFFLOAD_RSS_HASH
- DEV_RX_OFFLOAD_VLAN_FILTER
+ does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+ RTE_ETH_RX_OFFLOAD_RSS_HASH
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER
The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
.. code-block:: console
vlan_offload = rte_eth_dev_get_vlan_offload(port);
- vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+ vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
rte_eth_dev_set_vlan_offload(port, vlan_offload);
Another alternative is to modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a96e12d15515..7f7d6ae45658 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
Supports querying the speed capabilities of the current device.
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
* **[related] API**: ``rte_eth_dev_info_get()``.
@@ -101,11 +101,11 @@ Supports Rx interrupts.
Lock-free Tx queue
------------------
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises the RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capability, multiple threads can
invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
* **[related] API**: ``rte_eth_tx_burst()``.
@@ -117,8 +117,8 @@ Fast mbuf free
Supports optimization for fast release of mbufs following successful Tx.
Requires that per queue, all mbufs come from the same mempool and have refcnt = 1.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
.. _nic_features_free_tx_mbuf_on_demand:
@@ -165,7 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_JUMBO_FRAME``.
``dev_conf.rxmode.max_rx_pkt_len``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -178,7 +178,7 @@ Scattered Rx
Supports receiving segmented mbufs.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
* **[implements] datapath**: ``Scattered Rx function``.
* **[implements] rte_eth_dev_data**: ``scattered_rx``.
* **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,12 +206,12 @@ LRO
Supports Large Receive Offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
@@ -222,12 +222,12 @@ TSO
Supports TCP Segmentation Offloading.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
* **[uses] rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
* **[uses] mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
* **[uses] mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
* **[implements] datapath**: ``TSO functionality``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
.. _nic_features_promiscuous_mode:
@@ -288,9 +288,9 @@ RSS hash
Supports RSS hashing on RX.
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -303,7 +303,7 @@ Inner RSS
Supports RX RSS hashing on Inner headers.
* **[uses] rte_flow_action_rss**: ``level``.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
@@ -340,7 +340,7 @@ VMDq
Supports Virtual Machine Device Queues (VMDq).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -363,7 +363,7 @@ DCB
Supports Data Center Bridging (DCB).
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
* **[uses] user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
* **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -379,7 +379,7 @@ VLAN filter
Supports filtering of a VLAN Tag identifier.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
* **[implements] eth_dev_ops**: ``vlan_filter_set``.
* **[related] API**: ``rte_eth_dev_vlan_filter()``.
@@ -428,12 +428,12 @@ Supports inline crypto processing defined by rte_security library to perform cry
operations of a security protocol while the packet is received in the NIC. The NIC is not aware
of the protocol operations. See the Security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -449,13 +449,13 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
packet is received at NIC. The NIC is capable of understanding the security
protocol operations. See security library and PMD documentation for more details.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[implements] rte_security_ops**: ``session_create``, ``session_update``,
``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
* **[provides] rte_security_ops, capabilities_get**: ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -469,7 +469,7 @@ CRC offload
Supports CRC stripping by hardware.
A PMD is assumed to support CRC stripping by default. A PMD should advertise if it supports keeping the CRC.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
.. _nic_features_vlan_offload:
@@ -479,13 +479,13 @@ VLAN offload
Supports VLAN offload to hardware.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
@@ -497,14 +497,14 @@ QinQ offload
Supports QinQ (queue in queue) offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
.. _nic_features_fec:
@@ -518,7 +518,7 @@ information to correct the bit errors generated during data packet transmission
improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
* **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides] rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides] rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
* **[related] API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
@@ -529,16 +529,16 @@ L3 checksum offload
Supports L3 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[uses] mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
.. _nic_features_l4_checksum_offload:
@@ -548,8 +548,8 @@ L4 checksum offload
Supports L4 checksum offload.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -557,8 +557,8 @@ Supports L4 checksum offload.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
.. _nic_features_hw_timestamp:
@@ -567,10 +567,10 @@ Timestamp offload
Supports Timestamp.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
* **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
* **[related] eth_dev_ops**: ``read_clock``.
.. _nic_features_macsec_offload:
@@ -580,11 +580,11 @@ MACsec offload
Supports MACsec.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
.. _nic_features_inner_l3_checksum:
@@ -594,16 +594,16 @@ Inner L3 checksum
Supports inner packet L3 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
.. _nic_features_inner_l4_checksum:
@@ -613,15 +613,15 @@ Inner L4 checksum
Supports inner packet L4 checksum.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses] rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
- ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+ ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
.. _nic_features_packet_type_parsing:
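Taken together, the [uses]/[provides] entries map to an application flow along the lines of the sketch below, assuming port_id, nb_rxq and nb_txq are already chosen:

    struct rte_eth_conf conf = { 0 };
    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
        conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
    /* request only the hash types the device advertises */
    conf.rx_adv_conf.rss_conf.rss_hf =
        RTE_ETH_RSS_IP & dev_info.flow_type_rss_offloads;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);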
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..3dff65d89b6d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
will be checked:
-* ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+* ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
-* ``DEV_RX_OFFLOAD_CHECKSUM``
+* ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
-* ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+* ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
* ``fdir_conf->mode``
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fcea8151bf3c..e60e3b2a761d 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -222,21 +222,21 @@ For example,
* If the max number of VFs (max_vfs) is set in the range of 1 to 32:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
- pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+ pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
* If the max number of VFs (max_vfs) is in the range of 33 to 64:
If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
as ``rxq`` is not correct in this case;
- If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+ If the number of rxq is 2 (``--rxq=2`` in testpmd), then there are 64 pools in total (RTE_ETH_64_POOLS),
and each VF has 2 Rx queues;
- On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
- or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+ On the host, to enable VF RSS functionality, the Rx mq mode should be set to RTE_ETH_MQ_RX_VMDQ_RSS
+ or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
It is also necessary to configure VF RSS information such as the hash function, RSS key and RSS key length.
.. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b82e63438285..24fbccc982f5 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -69,13 +69,13 @@ Other features are supported using optional MACRO configuration. They include:
To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
-* DEV_RX_OFFLOAD_VLAN_STRIP
+* RTE_ETH_RX_OFFLOAD_VLAN_STRIP
-* DEV_RX_OFFLOAD_VLAN_EXTEND
+* RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
-* DEV_RX_OFFLOAD_CHECKSUM
+* RTE_ETH_RX_OFFLOAD_CHECKSUM
-* DEV_RX_OFFLOAD_HEADER_SPLIT
+* RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
* dev_conf
@@ -143,13 +143,13 @@ l3fwd
~~~~~
When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
Otherwise, by default, RX vPMD is disabled.
load_balancer
~~~~~~~~~~~~~
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
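In both cases the point is simply to leave the checksum bits clear in the port configuration; a one-line sketch, with port_conf standing in for the application's rte_eth_conf:

    /* keep the Rx vPMD eligible: do not request checksum offload */
    port_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_CHECKSUM;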
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..6facb68b9545 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
- CRC:
- - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+ - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
@@ -607,7 +607,7 @@ Driver options
small-packet traffic.
When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
- user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+ user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
configure a stride size large enough to accommodate max_rx_pkt_len as long as the
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packets.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
be added in future releases.
TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
**Known limitation:** TAP supports all of the above hash functions together
and not in partial combinations.
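Given that limitation, a configuration sketch would request the full set at once (rss_key left NULL to keep the default key; port_id is illustrative):

    struct rte_eth_rss_conf rss_conf = {
        .rss_key = NULL,        /* keep the default key */
        .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
                  RTE_ETH_RSS_TCP,
    };

    rte_eth_dev_rss_hash_update(port_id, &rss_conf);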
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
- the bit mask of required GSO types. The GSO library uses the same macros as
those that describe a physical device's TX offloading capabilities (i.e.
- ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+ ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
wants to segment TCP/IPv4 packets, it should set gso_types to
- ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
- supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
- ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+ ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other values currently
+ supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO`` and
+ ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
allowed.
- a flag that indicates whether the IPv4 headers of output segments should
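A minimal GSO context built with the renamed macros might look like the sketch below; direct_pool and indirect_pool are assumed to be pre-created mempools, and gso_size is illustrative:

    struct rte_gso_ctx gso_ctx = {
        .direct_pool = direct_pool,
        .indirect_pool = indirect_pool,
        .gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
                     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO,
        .gso_size = 1400,
        .flag = 0,
    };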
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM
set out_ip checksum to 0 in the packet
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
- calculate checksum of out_ip and out_udp::
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
set out_ip checksum to 0 in the packet
set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
- and DEV_TX_OFFLOAD_UDP_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+ and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
- calculate checksum of in_ip::
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
This is similar to case 1), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
This is similar to case 2), but l2_len is different. It is supported
- on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
- DEV_TX_OFFLOAD_TCP_CKSUM.
+ on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
Note that it can only work if outer L4 checksum is 0.
- segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
set in_tcp checksum to pseudo header without including the IP
payload length using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
Note that it can only work if outer L4 checksum is 0.
- calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
- This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
- DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+ This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
The list of flags and their precise meaning is described in the mbuf API
documentation (rte_mbuf.h). Also refer to the testpmd source code
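For illustration, a minimal sketch of case 1) above, assuming ``m`` is the
mbuf to transmit and ``ip`` points at its IPv4 header::

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* Tell the hardware where the headers are and what to fill in. */
    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv4_hdr);
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM;
    ip->hdr_checksum = 0;  /* hardware computes the checksum on Tx */

As the text above notes, this only works on ports advertising
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.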
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
concurrently on the same Tx queue without a SW lock. This PMD feature, found in some NICs, is useful in the following use cases:
* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
* In the eventdev use case, avoid dedicating a separate TX core for transmitting, thus
enabling more scaling as all workers can send packets.
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
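For illustration, a minimal sketch of this capability probing, assuming
``port_id`` is a valid configured port::

    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;
    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE) {
            /* Several lcores may call rte_eth_tx_burst() on one queue. */
    } else {
            /* Fall back to one Tx queue per lcore or app-level locking. */
    }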
Device Identification, Ownership and Configuration
--------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
Any requested offloading by an application must be within the device capabilities.
Any offloading is disabled by default if it is not set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
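For illustration, a minimal sketch of this configure-time flow, requesting
only offloads the device advertises (``port_id`` assumed valid)::

    #include <rte_ethdev.h>

    struct rte_eth_conf port_conf = {0};
    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_CHECKSUM)
            port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
            port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);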
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c05..1bac8f04b96e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1835,23 +1835,23 @@ only matching traffic goes through.
.. table:: RSS
- +---------------+---------------------------------------------+
- | Field | Value |
- +===============+=============================================+
- | ``func`` | RSS hash function to apply |
- +---------------+---------------------------------------------+
- | ``level`` | encapsulation level for ``types`` |
- +---------------+---------------------------------------------+
- | ``types`` | specific RSS hash types (see ``ETH_RSS_*``) |
- +---------------+---------------------------------------------+
- | ``key_len`` | hash key length in bytes |
- +---------------+---------------------------------------------+
- | ``queue_num`` | number of entries in ``queue`` |
- +---------------+---------------------------------------------+
- | ``key`` | hash key |
- +---------------+---------------------------------------------+
- | ``queue`` | queue indices to use |
- +---------------+---------------------------------------------+
+ +---------------+-------------------------------------------------+
+ | Field | Value |
+ +===============+=================================================+
+ | ``func`` | RSS hash function to apply |
+ +---------------+-------------------------------------------------+
+ | ``level`` | encapsulation level for ``types`` |
+ +---------------+-------------------------------------------------+
+ | ``types`` | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+ +---------------+-------------------------------------------------+
+ | ``key_len`` | hash key length in bytes |
+ +---------------+-------------------------------------------------+
+ | ``queue_num`` | number of entries in ``queue`` |
+ +---------------+-------------------------------------------------+
+ | ``key`` | hash key |
+ +---------------+-------------------------------------------------+
+ | ``queue`` | queue indices to use |
+ +---------------+-------------------------------------------------+
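For illustration, a minimal sketch filling this action structure; the
two-queue spread and the TCP/IPv4 hash type are arbitrary example choices::

    #include <rte_flow.h>

    uint16_t queues[2] = {0, 1};
    struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
            .level = 0,                  /* outermost encapsulation */
            .types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
            .key_len = 0,                /* keep the device default key */
            .queue_num = 2,
            .key = NULL,
            .queue = queues,
    };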
Action: ``PF``
^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f72bc8a78fa6..e3bd451917f0 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -560,7 +560,7 @@ created by the application is attached to the security session by the API
For Inline Crypto and Inline protocol offload, device specific defined metadata is
updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
For inline protocol offloaded ingress traffic, the application can register a
pointer, ``userdata`` , in the security session. When the packet is received,
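For illustration, a minimal sketch of attaching the metadata before Tx,
assuming ``ctx`` and ``sess`` are the security context and session created
earlier and ``m`` is the outgoing mbuf::

    #include <rte_ethdev.h>
    #include <rte_security.h>

    struct rte_eth_dev_info dev_info;
    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA)
            rte_security_set_pkt_metadata(ctx, sess, m, NULL);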
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b0b..20159a1c9a90 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
usage in following public struct hierarchy:
- ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+ ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
Such usages need to be identified and fixed in 20.11; otherwise this blocks
us from extending the existing enums/defines.
One solution can be using a fixed size array instead of ``.*MAX.*`` value.
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
- Macros will be added for backward compatibility.
- Backward compatibility macros will be removed on v22.11.
- A few old backward compatibility macros from 2013 that does not have
- proper prefix will be removed on v21.11.
-
* ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
will be removed in DPDK 20.11.
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
This will allow an application to enable or disable PMD updates to
``rte_mbuf::hash::fdir``.
This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
@@ -98,7 +92,7 @@ Deprecation Notices
either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
+ cases ``RTE_ETH_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
can't be bigger than Rx buffer size.
To cover these cases an application needs to know the device packet overhead to be
able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
* ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
- (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
allows the user to disable some of the RX HW offload capabilities.
By default all HW RX offloads are enabled.
* ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
- (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+ (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
allows the user to disable some of the TX HW offload capabilities.
By default all HW TX offloads are enabled.
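For illustration, a minimal sketch of computing a mask value for these
options; the chosen offload bits are only an example::

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print a --rxoffload argument enabling only checksum and scatter. */
    int main(void)
    {
            uint64_t mask = RTE_ETH_RX_OFFLOAD_CHECKSUM |
                            RTE_ETH_RX_OFFLOAD_SCATTER;
            printf("--rxoffload 0x%" PRIx64 "\n", mask);
            return 0;
    }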
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 6061674239f4..d7f5951d4639 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -526,7 +526,7 @@ The command line options are:
Set the hexadecimal bitmask of the RX multi queue modes which can be enabled.
The default value is 0x7::
- ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+ RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
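For illustration, assuming this is testpmd's ``--rx-mq-mode`` option, a mask
of 0x1 (RTE_ETH_MQ_RX_RSS_FLAG) restricts the mode to RSS only::

    dpdk-testpmd -l 0-1 -n 4 -- -i --rx-mq-mode 0x1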
* ``--record-core-cycles``
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
struct usdpaa_ioctl_link_status_args_old {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
- /* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+ /* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
int link_autoneg;
};
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
struct usdpaa_ioctl_update_link_status_args {
/* network device node name */
char if_name[IF_NAME_MAX_LEN];
- /* link status(ETH_LINK_UP/DOWN) */
+ /* link status(RTE_ETH_LINK_UP/DOWN) */
int link_status;
};
struct usdpaa_ioctl_update_link_speed {
/* network device node name*/
char if_name[IF_NAME_MAX_LEN];
- /* link speed (ETH_SPEED_NUM_)*/
+ /* link speed (RTE_ETH_SPEED_NUM_)*/
int link_speed;
- /* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+ /* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
int link_duplex;
};
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index bab25fd72eee..360bf75d3861 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -153,7 +153,7 @@ enum roc_npc_rss_hash_function {
struct roc_npc_action_rss {
enum roc_npc_rss_hash_function func;
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b73b211fd249..fb5d549e6227 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -91,10 +91,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -265,7 +265,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -295,7 +295,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
internals->tx_queue[i].sockfd = -1;
}
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -316,8 +316,8 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
return 0;
}
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 74ffa4511284..dbf745852716 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
/* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -654,7 +654,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -663,7 +663,7 @@ eth_dev_start(struct rte_eth_dev *dev)
static int
eth_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
/* ARK PMD supports all line rates, how do we indicate that here ?? */
- dev_info->speed_capa = (ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G);
-
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G);
+
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return 0;
}
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..5af1cff3770e 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,21 +154,21 @@ static struct rte_pci_driver rte_atl_pmd = {
.remove = eth_atl_pci_remove,
};
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
- | DEV_RX_OFFLOAD_IPV4_CKSUM \
- | DEV_RX_OFFLOAD_UDP_CKSUM \
- | DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
- | DEV_RX_OFFLOAD_MACSEC_STRIP \
- | DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
- | DEV_TX_OFFLOAD_IPV4_CKSUM \
- | DEV_TX_OFFLOAD_UDP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_CKSUM \
- | DEV_TX_OFFLOAD_TCP_TSO \
- | DEV_TX_OFFLOAD_MACSEC_INSERT \
- | DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+ | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_RX_OFFLOAD_JUMBO_FRAME \
+ | RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+ | RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+ | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+ | RTE_ETH_TX_OFFLOAD_TCP_TSO \
+ | RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+ | RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define SFP_EEPROM_SIZE 0x100
@@ -489,7 +489,7 @@ atl_dev_start(struct rte_eth_dev *dev)
/* set adapter started */
hw->adapter_stopped = 0;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -656,18 +656,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
uint32_t link_speeds = dev->data->dev_conf.link_speeds;
uint32_t speed_mask = 0;
- if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed_mask = hw->aq_nic_cfg->link_speed_msk;
} else {
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
speed_mask |= AQ_NIC_RATE_10G;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
speed_mask |= AQ_NIC_RATE_5G;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
speed_mask |= AQ_NIC_RATE_1G;
- if (link_speeds & ETH_LINK_SPEED_2_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed_mask |= AQ_NIC_RATE_2G5;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
speed_mask |= AQ_NIC_RATE_100M;
}
@@ -1128,10 +1128,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
return 0;
}
@@ -1176,10 +1176,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
u32 fc = AQ_NIC_FC_OFF;
int err = 0;
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
memset(&old, 0, sizeof(old));
/* load old link status */
@@ -1199,8 +1199,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
return 0;
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = hw->aq_link_status.mbps;
rte_eth_linkstatus_set(dev, &link);
@@ -1334,7 +1334,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1533,13 +1533,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
hw->aq_fw_ops->get_flow_control(hw, &fc);
if (fc == AQ_NIC_FC_OFF)
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (fc & AQ_NIC_FC_RX)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (fc & AQ_NIC_FC_TX)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
return 0;
}
@@ -1554,13 +1554,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
if (hw->aq_fw_ops->set_flow_control == NULL)
return -ENOTSUP;
- if (fc_conf->mode == RTE_FC_NONE)
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
- else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
- else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
- else if (fc_conf->mode == RTE_FC_FULL)
+ else if (fc_conf->mode == RTE_ETH_FC_FULL)
hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1731,14 +1731,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+ ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
- cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+ cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
for (i = 0; i < dev->data->nb_rx_queues; i++)
hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
- if (mask & ETH_VLAN_EXTEND_MASK)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK)
ret = -ENOTSUP;
return ret;
@@ -1754,10 +1754,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
PMD_INIT_FUNC_TRACE();
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
break;
default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c97..da993be35faa 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
#include "hw_atl/hw_atl_utils.h"
#define ATL_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define ATL_DEV_PRIVATE_TO_HW(adapter) \
(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306ec..ddf110d6ce7e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
- (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..e870ced7e992 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -2011,9 +2011,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
/* Setup required number of queues */
_avp_set_queue_counts(eth_dev);
- mask = (ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ mask = (RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
ret = avp_vlan_offload_set(eth_dev, mask);
if (ret < 0) {
PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2153,8 +2153,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_eth_link *link = &eth_dev->data->dev_link;
- link->link_speed = ETH_SPEED_NUM_10G;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_status = !!(avp->flags & AVP_F_LINKUP);
return -1;
@@ -2204,8 +2204,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
}
return 0;
@@ -2218,9 +2218,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
uint64_t offloads = dev_conf->rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
else
avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2229,13 +2229,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
}
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index 786288a7b079..c0f033e06b15 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
pdata->rss_hf = rss_conf->rss_hf;
rss_hf = rss_conf->rss_hf;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
- if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..d4ba06c43a61 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
/* Checksum offload to hardware */
pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM;
return 0;
}
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
{
struct axgbe_port *pdata = dev->data->dev_private;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
pdata->rss_enable = 1;
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
pdata->rss_enable = 0;
else
return -1;
@@ -383,7 +383,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -588,13 +588,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
- if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+ if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
if (pdata->rss_hf &
- (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
/* Set the RSS options */
@@ -763,7 +763,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
link.link_status = pdata->phy_link;
link.link_speed = pdata->phy_speed;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1206,25 +1206,25 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (pdata->hw_feat.rss) {
dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1261,13 +1261,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc.autoneg = pdata->pause_autoneg;
if (pdata->rx_pause && pdata->tx_pause)
- fc.mode = RTE_FC_FULL;
+ fc.mode = RTE_ETH_FC_FULL;
else if (pdata->rx_pause)
- fc.mode = RTE_FC_RX_PAUSE;
+ fc.mode = RTE_ETH_FC_RX_PAUSE;
else if (pdata->tx_pause)
- fc.mode = RTE_FC_TX_PAUSE;
+ fc.mode = RTE_ETH_FC_TX_PAUSE;
else
- fc.mode = RTE_FC_NONE;
+ fc.mode = RTE_ETH_FC_NONE;
fc_conf->high_water = (1024 + (fc.low_water[0] << 9)) / 1024;
fc_conf->low_water = (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1297,13 +1297,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
AXGMAC_IOWRITE(pdata, reg, reg_val);
fc.mode = fc_conf->mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
} else {
@@ -1385,15 +1385,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
fc.mode = pfc_conf->fc.mode;
- if (fc.mode == RTE_FC_FULL) {
+ if (fc.mode == RTE_ETH_FC_FULL) {
pdata->tx_pause = 1;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_RX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
pdata->tx_pause = 0;
pdata->rx_pause = 1;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
- } else if (fc.mode == RTE_FC_TX_PAUSE) {
+ } else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
pdata->tx_pause = 1;
pdata->rx_pause = 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1492,11 +1492,11 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
if (frame_size > AXGBE_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
@@ -1842,8 +1842,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+ case RTE_ETH_VLAN_TYPE_INNER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
if (qinq) {
if (tpid != 0x8100 && tpid != 0x88a8)
PMD_DRV_LOG(ERR,
@@ -1860,8 +1860,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"Inner type not supported in single tag\n");
}
break;
- case ETH_VLAN_TYPE_OUTER:
- PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+ case RTE_ETH_VLAN_TYPE_OUTER:
+ PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
if (qinq) {
PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
/*Enable outer VLAN tag*/
@@ -1878,11 +1878,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
"tag supported 0x8100/0x88A8\n");
}
break;
- case ETH_VLAN_TYPE_MAX:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+ case RTE_ETH_VLAN_TYPE_MAX:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
break;
- case ETH_VLAN_TYPE_UNKNOWN:
- PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+ case RTE_ETH_VLAN_TYPE_UNKNOWN:
+ PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
break;
}
return 0;
@@ -1916,8 +1916,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1927,8 +1927,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_stripping(pdata);
}
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
pdata->eth_dev->device->name);
pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1938,14 +1938,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
pdata->hw_if.disable_rx_vlan_filtering(pdata);
}
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
axgbe_vlan_extend_enable(pdata);
/* Set global registers with default ethertype*/
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
} else {
PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
/* Receive Side Scaling */
#define AXGBE_RSS_OFFLOAD ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define AXGBE_RSS_HASH_KEY_SIZE 40
#define AXGBE_RSS_MAX_TABLE_SIZE 256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
pdata->an_int = 0;
axgbe_an73_clear_interrupts(pdata);
pdata->eth_dev->data->dev_link.link_status =
- ETH_LINK_DOWN;
+ RTE_ETH_LINK_DOWN;
} else if (pdata->an_state == AXGBE_AN_ERROR) {
PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 33f709a6bb02..baa17a5fb43f 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
(DMA_CH_INC * rxq->queue_id));
rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
DMA_CH_RDTR_LO);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
mbuf->vlan_tci =
AXGMAC_GET_BITS_LE(desc->write.desc0,
RX_NORMAL_DESC0, OVT);
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
else
mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..14d91f868cd8 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
link.link_speed = sc->link_vars.line_speed;
switch (sc->link_vars.duplex) {
case DUPLEX_FULL:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case DUPLEX_HALF:
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
link.link_status = sc->link_vars.link_up;
return rte_eth_linkstatus_set(dev, &link);
@@ -181,7 +181,7 @@ bnx2x_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
dev->data->mtu = sc->mtu;
}
@@ -412,7 +412,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
"VF device is no longer operational");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
}
return ret;
@@ -538,8 +538,8 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
- dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -675,7 +675,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
bnx2x_load_firmware(sc);
assert(sc->firmware);
- if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
sc->udp_rss = 1;
sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 494a1eff3700..7e313c2fb5af 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,40 +569,40 @@ struct bnxt_rep_info {
#define BNXT_FW_STATUS_SHUTDOWN 0x100000
#define BNXT_ETH_RSS_SUPPORT ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb2d..99d4953305e3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
goto err_out;
/* Alloc RSS context only if RSS mode is enabled */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int j, nr_ctxs = bnxt_rss_ctxts(bp);
/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
* setting is not available at this time, it will not be
* configured correctly in the CFA.
*/
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
- (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
true : false);
if (rc)
goto err_out;
@@ -738,11 +738,11 @@ static int bnxt_start_nic(struct bnxt *bp)
if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
bp->flags |= BNXT_FLAG_JUMBO;
} else {
bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
bp->flags &= ~BNXT_FLAG_JUMBO;
}
@@ -908,35 +908,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
link_speed = bp->link_info->support_pam4_speeds;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
- speed_capa |= ETH_LINK_SPEED_2_5G;
+ speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
- speed_capa |= ETH_LINK_SPEED_20G;
+ speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
if (bp->link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
@@ -980,8 +980,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
dev_info->tx_queue_offload_capa;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
@@ -1030,8 +1030,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
*/
/* VMDq resources */
- vpool = 64; /* ETH_64_POOLS */
- vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+ vpool = 64; /* RTE_ETH_64_POOLS */
+ vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
for (i = 0; i < 4; vpool >>= 1, i++) {
if (max_vnics > vpool) {
for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1126,18 +1126,18 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
(uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
goto resource_error;
- if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+ if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
bp->max_vnics < eth_dev->data->nb_rx_queues)
goto resource_error;
bp->rx_cp_nr_rings = bp->rx_nr_rings;
bp->tx_cp_nr_rings = bp->tx_nr_rings;
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
eth_dev->data->mtu =
eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
@@ -1168,7 +1168,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
eth_dev->data->port_id,
(uint32_t)link->link_speed,
- (link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n"));
else
PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1184,10 +1184,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
uint16_t buf_size;
int i;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return 1;
- if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
return 1;
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1232,16 +1232,16 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
* a limited subset have been enabled.
*/
if (eth_dev->data->dev_conf.rxmode.offloads &
- ~(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_VLAN_FILTER))
+ ~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
goto use_scalar_rx;
#if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1293,7 +1293,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
* or tx offloads.
*/
if (eth_dev->data->scattered_rx ||
- (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
BNXT_TRUFLOW_EN(bp))
goto use_scalar_tx;
@@ -1594,10 +1594,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
bnxt_link_update_op(eth_dev, 1);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- vlan_mask |= ETH_VLAN_FILTER_MASK;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- vlan_mask |= ETH_VLAN_STRIP_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
if (rc)
goto error;
@@ -1819,8 +1819,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
/* Retrieve link info from hardware */
rc = bnxt_get_hwrm_link_config(bp, &new);
if (rc) {
- new.link_speed = ETH_LINK_SPEED_100M;
- new.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new.link_speed = RTE_ETH_LINK_SPEED_100M;
+ new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR,
"Failed to retrieve link rc = 0x%x!\n", rc);
goto out;
@@ -2014,7 +2014,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
if (!vnic->rss_table)
return -EINVAL;
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return -EINVAL;
if (reta_size != tbl_size) {
@@ -2120,7 +2120,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
* If RSS enablement were different than dev_configure,
* then return -EINVAL
*/
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (!rss_conf->rss_hf)
PMD_DRV_LOG(ERR, "Hash type NONE\n");
} else {
@@ -2138,7 +2138,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
vnic->hash_mode =
bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
- ETH_RSS_LEVEL(rss_conf->rss_hf));
+ RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
/*
* If hashkey is not specified, use the previously configured
@@ -2183,30 +2183,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
hash_types = vnic->hash_type;
rss_conf->rss_hf = 0;
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
}
if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
hash_types &=
~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
}
@@ -2246,17 +2246,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
fc_conf->autoneg = 1;
switch (bp->link_info->pause) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
}
return 0;
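
As an aside on the renamed flow-control modes: they reach applications through the standard ethdev API. A minimal consumer-side sketch (the helper name and error handling are illustrative, not part of this series):

#include <rte_ethdev.h>

/* Sketch: query the negotiated flow-control mode and test for the
 * renamed RTE_ETH_FC_FULL value (both Rx and Tx pause active).
 */
static int check_flow_ctrl(uint16_t port_id)
{
        struct rte_eth_fc_conf fc_conf;
        int rc;

        rc = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (rc != 0)
                return rc;
        return fc_conf.mode == RTE_ETH_FC_FULL ? 0 : -1;
}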
@@ -2279,11 +2279,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
bp->link_info->auto_pause = 0;
bp->link_info->force_pause = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2294,7 +2294,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
}
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2305,7 +2305,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
}
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
if (fc_conf->autoneg) {
bp->link_info->auto_pause =
HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2336,7 +2336,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2351,7 +2351,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
bp->vxlan_port_cnt++;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
udp_tunnel->udp_port);
@@ -2389,7 +2389,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
return rc;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (!bp->vxlan_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2406,7 +2406,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
port = bp->vxlan_fw_dst_port_id;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (!bp->geneve_port_cnt) {
PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
return -EINVAL;
@@ -2584,7 +2584,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
int rc;
vnic = BNXT_GET_DEFAULT_VNIC(bp);
- if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
/* Remove any VLAN filters programmed */
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
@@ -2604,7 +2604,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
bnxt_add_vlan_filter(bp, 0);
}
PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
return 0;
}
@@ -2617,7 +2617,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
/* Destroy vnic filters and vnic */
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
bnxt_del_vlan_filter(bp, i);
}
@@ -2656,7 +2656,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
if (bp->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = bnxt_add_vlan_filter(bp, 0);
if (rc)
return rc;
@@ -2674,7 +2674,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
return rc;
PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
- !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+ !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
return rc;
}
@@ -2694,22 +2694,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
if (!dev->data->dev_started)
return 0;
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering */
rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
if (rc)
return rc;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
else
PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2724,10 +2724,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
{
struct bnxt *bp = dev->data->dev_private;
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
- if (vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -2739,7 +2739,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
return -EINVAL;
}
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
switch (tpid) {
case RTE_ETHER_TYPE_QINQ:
bp->outer_tpid_bd =
@@ -2767,7 +2767,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
}
bp->outer_tpid_bd |= tpid;
PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer vlan in QinQ\n");
return -EINVAL;
@@ -2807,7 +2807,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
bnxt_del_dflt_mac_filter(bp, vnic);
memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
/* This filter will allow only untagged packets */
rc = bnxt_add_vlan_filter(bp, 0);
} else {
@@ -3029,10 +3029,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (new_mtu > RTE_ETHER_MTU) {
bp->flags |= BNXT_FLAG_JUMBO;
bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
} else {
bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
bp->flags &= ~BNXT_FLAG_JUMBO;
}
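
Since the series is a mechanical rename, an application that has to build against both pre- and post-rename ethdev headers can bridge the gap locally. A hypothetical shim, not part of this patch, covering a few of the constants touched above:

/* Map the new names onto the old ones when building against a
 * pre-rename DPDK; drop the shim once the old aliases disappear.
 */
#ifndef RTE_ETH_RX_OFFLOAD_VLAN_FILTER
#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER  DEV_RX_OFFLOAD_VLAN_FILTER
#define RTE_ETH_RX_OFFLOAD_JUMBO_FRAME  DEV_RX_OFFLOAD_JUMBO_FRAME
#define RTE_ETH_LINK_SPEED_100M         ETH_LINK_SPEED_100M
#define RTE_ETH_FC_FULL                 RTE_FC_FULL
#endif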
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a6f..98e1107f629c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -974,7 +974,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
vnic->vlan_strip = true;
else
vnic->vlan_strip = false;
@@ -1157,7 +1157,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
rxq = bp->rx_queues[act_q->index];
- if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+ if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
vnic->fw_vnic_id != INVALID_HW_RING_ID)
goto use_vnic;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index f29d57423585..0d9dda0c362c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
uint16_t j = dst_id - 1;
//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
- if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+ if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
conf->pool_map[j].pools & (1UL << j)) {
PMD_DRV_LOG(DEBUG,
"Add vlan %u to vmdq pool %u\n",
@@ -2955,12 +2955,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
{
uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
- if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+ if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
switch (conf_link_speed) {
- case ETH_LINK_SPEED_10M_HD:
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
}
@@ -2977,51 +2977,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
{
uint16_t eth_link_speed = 0;
- if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
- return ETH_LINK_SPEED_AUTONEG;
+ if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+ return RTE_ETH_LINK_SPEED_AUTONEG;
- switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_100M:
- case ETH_LINK_SPEED_100M_HD:
+ switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M_HD:
/* FALLTHROUGH */
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
break;
- case ETH_LINK_SPEED_2_5G:
+ case RTE_ETH_LINK_SPEED_2_5G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
break;
- case ETH_LINK_SPEED_20G:
+ case RTE_ETH_LINK_SPEED_20G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
eth_link_speed = pam4_link ?
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
eth_link_speed =
HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
break;
@@ -3034,11 +3034,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
return eth_link_speed;
}
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
- ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+ RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
static int bnxt_validate_link_speed(struct bnxt *bp)
{
@@ -3047,13 +3047,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
uint32_t link_speed_capa;
uint32_t one_speed;
- if (link_speed == ETH_LINK_SPEED_AUTONEG)
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
return 0;
link_speed_capa = bnxt_get_speed_capabilities(bp);
- if (link_speed & ETH_LINK_SPEED_FIXED) {
- one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+ if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+ one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
if (one_speed & (one_speed - 1)) {
PMD_DRV_LOG(ERR,
@@ -3083,71 +3083,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
{
uint16_t ret = 0;
- if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+ if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
if (bp->link_info->support_speeds)
return bp->link_info->support_speeds;
link_speed = BNXT_SUPPORTED_SPEEDS;
}
- if (link_speed & ETH_LINK_SPEED_100M)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_100M_HD)
+ if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
- if (link_speed & ETH_LINK_SPEED_1G)
+ if (link_speed & RTE_ETH_LINK_SPEED_1G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
- if (link_speed & ETH_LINK_SPEED_2_5G)
+ if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
- if (link_speed & ETH_LINK_SPEED_10G)
+ if (link_speed & RTE_ETH_LINK_SPEED_10G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
- if (link_speed & ETH_LINK_SPEED_20G)
+ if (link_speed & RTE_ETH_LINK_SPEED_20G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
- if (link_speed & ETH_LINK_SPEED_25G)
+ if (link_speed & RTE_ETH_LINK_SPEED_25G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
- if (link_speed & ETH_LINK_SPEED_40G)
+ if (link_speed & RTE_ETH_LINK_SPEED_40G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
- if (link_speed & ETH_LINK_SPEED_50G)
+ if (link_speed & RTE_ETH_LINK_SPEED_50G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
- if (link_speed & ETH_LINK_SPEED_100G)
+ if (link_speed & RTE_ETH_LINK_SPEED_100G)
ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
- if (link_speed & ETH_LINK_SPEED_200G)
+ if (link_speed & RTE_ETH_LINK_SPEED_200G)
ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
return ret;
}
static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
{
- uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+ uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
switch (hw_link_speed) {
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
- eth_link_speed = ETH_SPEED_NUM_100M;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
- eth_link_speed = ETH_SPEED_NUM_1G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
- eth_link_speed = ETH_SPEED_NUM_2_5G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
- eth_link_speed = ETH_SPEED_NUM_10G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
- eth_link_speed = ETH_SPEED_NUM_20G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
- eth_link_speed = ETH_SPEED_NUM_25G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
- eth_link_speed = ETH_SPEED_NUM_40G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
- eth_link_speed = ETH_SPEED_NUM_50G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
- eth_link_speed = ETH_SPEED_NUM_100G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
- eth_link_speed = ETH_SPEED_NUM_200G;
+ eth_link_speed = RTE_ETH_SPEED_NUM_200G;
break;
case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
default:
@@ -3160,16 +3160,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
{
- uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (hw_link_duplex) {
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
/* FALLTHROUGH */
- eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
- eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+ eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
default:
PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3198,12 +3198,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
link->link_speed =
bnxt_parse_hw_link_speed(link_info->link_speed);
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
link->link_status = link_info->link_up;
link->link_autoneg = link_info->auto_mode ==
HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
- ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+ RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
exit:
return rc;
}
@@ -3229,7 +3229,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
if (BNXT_CHIP_P5(bp) &&
- dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+ dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
/* 40G is not supported as part of media auto detect.
* The speed should be forced and autoneg disabled
* to configure 40G speed.
@@ -3320,7 +3320,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
HWRM_CHECK_RESULT();
- bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+ bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
svif_info = rte_le_to_cpu_16(resp->svif_info);
if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
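
The 40G auto-detect restriction noted in the hunk above is driven by the application's link_speeds setting. A minimal sketch of forcing a single fixed speed with the renamed constants (helper name and queue counts are placeholders):

#include <rte_ethdev.h>

static int force_40g(uint16_t port_id)
{
        struct rte_eth_conf conf = {0};

        /* Fixed speed: autoneg disabled, exactly one speed bit set. */
        conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_40G;
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}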
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d7f..a9f5e13476b0 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -536,7 +536,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 957b175f1b89..632a611bf612 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -185,7 +185,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
int tpa_info_len = 0;
- if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
int tpa_max = BNXT_TPA_MAX_AGGS(bp);
tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -278,7 +278,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
ag_bitmap_start, ag_bitmap_len);
/* TPA info */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rx_ring_info->tpa_info =
((struct bnxt_tpa_info *)((char *)mz->addr +
tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index bbcb3b06e7df..0ac3a2b3b7d3 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -41,13 +41,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->nr_vnics = 0;
/* Multi-queue mode */
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* FALLTHROUGH */
/* ETH_8/64_POOLs */
pools = conf->nb_queue_pools;
@@ -55,14 +55,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
max_pools = RTE_MIN(bp->max_vnics,
RTE_MIN(bp->max_l2_ctx,
RTE_MIN(bp->max_rsscos_ctx,
- ETH_64_POOLS)));
+ RTE_ETH_64_POOLS)));
PMD_DRV_LOG(DEBUG,
"pools = %u max_pools = %u\n",
pools, max_pools);
if (pools > max_pools)
pools = max_pools;
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
break;
default:
@@ -100,7 +100,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
ring_idx, rxq, i, vnic);
}
if (i == 0) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
bp->eth_dev->data->promiscuous = 1;
vnic->flags |= BNXT_VNIC_INFO_PROMISC;
}
@@ -110,8 +110,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
vnic->end_grp_id = end_grp_id;
if (i) {
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
- !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+ !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
vnic->rss_dflt_cr = true;
goto skip_filter_allocation;
}
@@ -136,14 +136,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
bp->rx_num_qs_per_vnic = nb_q_per_grp;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
if (bp->flags & BNXT_FLAG_UPDATE_HASH)
bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
for (i = 0; i < bp->nr_vnics; i++) {
- uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+ uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
vnic = &bp->vnic_info[i];
vnic->hash_type =
@@ -338,7 +338,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
rxq->queue_id = queue_idx;
rxq->port_id = eth_dev->data->port_id;
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -454,7 +454,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
}
PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
vnic = rxq->vnic;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -525,7 +525,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rxq->rx_started = false;
PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (BNXT_HAS_RING_GRPS(bp))
vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
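
The pool clamp above caps VMDq pools at RTE_ETH_64_POOLS. For reference, a minimal sketch of the application-side VMDq configuration that exercises this path (helper name and queue counts are assumptions):

#include <rte_ethdev.h>

static int configure_vmdq(uint16_t port_id)
{
        struct rte_eth_conf conf = {0};

        /* VMDq-only Rx with the maximum pool count the driver accepts. */
        conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
        conf.rx_adv_conf.vmdq_rx_conf.nb_queue_pools = RTE_ETH_64_POOLS;
        return rte_eth_dev_configure(port_id, 64, 1, &conf);
}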
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 73fbdd17d126..0909bab89b76 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
dev_conf = &rxq->bp->eth_dev->data->dev_conf;
offloads = dev_conf->rxmode.offloads;
- outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+ outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
/* Initialize ol_flags table. */
pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static inline void
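
The fast-free completion path only applies under the usual RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE contract: all transmitted mbufs come from one mempool with a reference count of one. A minimal opt-in sketch at queue setup (descriptor count and helper name are illustrative):

#include <rte_ethdev.h>

static int setup_fast_free_txq(uint16_t port_id, uint16_t queue_id)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;
        int rc;

        rc = rte_eth_dev_info_get(port_id, &dev_info);
        if (rc != 0)
                return rc;
        txconf = dev_info.default_txconf;
        if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
                txconf.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
        return rte_eth_tx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &txconf);
}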
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 47824334ae3e..401dd83f4e7d 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -350,7 +350,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
}
/*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
* is enabled.
*/
static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -476,7 +476,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
} while (nb_tx_pkts < ring_mask);
if (nb_tx_pkts) {
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
bnxt_tx_cmp_fast(txq, nb_tx_pkts);
else
bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
{
uint16_t hwrm_type = 0;
- if (rte_type & ETH_RSS_IPV4)
+ if (rte_type & RTE_ETH_RSS_IPV4)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
- if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
- if (rte_type & ETH_RSS_IPV6)
+ if (rte_type & RTE_ETH_RSS_IPV6)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
- if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
{
uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
- bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
- bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP));
+ bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+ bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP));
bool l3_only = l3 && !l4;
bool l3_and_l4 = l3 && l4;
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
* return default hash mode.
*/
if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
- return ETH_RSS_LEVEL_PMD_DEFAULT;
+ return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
- rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
- rss_level |= ETH_RSS_LEVEL_INNERMOST;
+ rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
else
- rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+ rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
return rss_level;
}
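
The hash-level translation above is driven by the RSS level bits encoded in rss_hf. A minimal sketch of requesting innermost-header hashing from the application side (helper name is a placeholder; a NULL key keeps the PMD's current one):

#include <rte_ethdev.h>

static int request_inner_rss(uint16_t port_id)
{
        struct rte_eth_rss_conf rss_conf = {
                .rss_key = NULL,
                .rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
                          RTE_ETH_RSS_LEVEL_INNERMOST,
        };

        /* RTE_ETH_RSS_LEVEL(rss_conf.rss_hf) recovers the level bits. */
        return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}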
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
if (vf >= bp->pdev->max_vfs)
return -EINVAL;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
return -ENOTSUP;
}
/* Is this really the correct mapping? VFd seems to think it is. */
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
flag |= BNXT_VNIC_INFO_PROMISC;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
flag |= BNXT_VNIC_INFO_BCAST;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
if (on)
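
For completeness, the renamed VMDq accept flags above arrive through the BNXT-specific VF Rx-mode call; the sketch below assumes the rte_pmd_bnxt_set_vf_rxmode() signature shown in this file and an example mask:

#include <rte_pmd_bnxt.h>

static int open_vf_rxmode(uint16_t port_id)
{
        uint16_t mask = RTE_ETH_VMDQ_ACCEPT_BROADCAST |
                        RTE_ETH_VMDQ_ACCEPT_MULTICAST;

        /* Let VF 0 receive broadcast and multicast traffic. */
        return rte_pmd_bnxt_set_vf_rxmode(port_id, 0, mask, 1);
}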
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..fcf878b9b858 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,7 +167,7 @@ struct bond_dev_private {
struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptor limits */
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
RTE_RETA_GROUP_SIZE];
uint8_t rss_key[52]; /**< 52-byte hash key buffer. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 128754f4595a..20adfcf0ea9c 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
uint16_t key_speed;
switch (speed) {
- case ETH_SPEED_NUM_NONE:
+ case RTE_ETH_SPEED_NUM_NONE:
key_speed = 0x00;
break;
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
key_speed = BOND_LINK_SPEED_KEY_10M;
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
key_speed = BOND_LINK_SPEED_KEY_100M;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
key_speed = BOND_LINK_SPEED_KEY_1000M;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
key_speed = BOND_LINK_SPEED_KEY_10G;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
key_speed = BOND_LINK_SPEED_KEY_20G;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
key_speed = BOND_LINK_SPEED_KEY_40G;
break;
default:
@@ -866,7 +866,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (ret >= 0 && link_info.link_status != 0) {
key = link_speed_key(link_info.link_speed) << 1;
- if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+ if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
key |= BOND_LINK_FULL_DUPLEX_KEY;
} else {
key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index eb8d15d16034..f12060bcafb0 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
return 0;
internals = bonded_eth_dev->data->dev_private;
@@ -586,7 +586,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
return -1;
}
- if (link_props.link_status == ETH_LINK_UP) {
+ if (link_props.link_status == RTE_ETH_LINK_UP) {
if (internals->active_slave_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
@@ -721,7 +721,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
internals->tx_queue_offload_capa = 0;
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = 0;
internals->candidate_max_rx_pktlen = 0;
internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index a6755661c49c..2482bb1cbc02 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1373,8 +1373,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
* In any other mode the link properties are set to default
* values of AUTONEG/DUPLEX
*/
- ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
- ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+ ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
}
}
@@ -1704,7 +1704,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
/* If RSS is enabled for bonding, try to enable it for slaves */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
if (internals->rss_key_len != 0) {
slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
@@ -1721,23 +1721,23 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
@@ -1838,7 +1838,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* If RSS is enabled for bonding, synchronize RETA */
- if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
int i;
struct bond_dev_private *internals;
@@ -1961,7 +1961,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 1;
internals = eth_dev->data->dev_private;
@@ -2101,7 +2101,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
tlb_last_obytets[internals->active_slaves[i]] = 0;
}
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
@@ -2423,15 +2423,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
bond_ctx = ethdev->data->dev_private;
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
bond_ctx->active_slave_count == 0) {
- ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
- ethdev->data->dev_link.link_status = ETH_LINK_UP;
+ ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
if (wait_to_complete)
link_update = rte_eth_link_get;
@@ -2456,7 +2456,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
&slave_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
- ETH_SPEED_NUM_NONE;
+ RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
"Slave (port %u) link get failed: %s",
bond_ctx->active_slaves[idx],
@@ -2498,7 +2498,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
* In these modes the maximum theoretical link speed is the sum
* of all the slaves
*/
- ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2872,7 +2872,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
goto link_update;
/* check link state properties if bonded link is up */
- if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
"for slave %d in bonding mode %d",
@@ -2888,7 +2888,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (internals->active_slave_count < 1) {
/* If first active slave, then change link status */
bonded_eth_dev->data->dev_link.link_status =
- ETH_LINK_UP;
+ RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
@@ -3279,7 +3279,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->max_rx_pktlen = 0;
/* Initially allow to choose any offload type */
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
memset(&internals->default_rxconf, 0,
sizeof(internals->default_rxconf));
@@ -3508,7 +3508,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
* set key to the value specified in port RSS configuration.
* Fall back to default RSS key if the key is not specified
*/
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
internals->rss_key_len =
dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len;
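
The slave-propagation logic above mirrors whatever RSS state the application sets on the bonded port. A minimal sketch of that application-side configuration (the 40-byte key length is an assumption; many NICs use it as a default):

#include <rte_ethdev.h>

static uint8_t bond_rss_key[40]; /* application-chosen key */

static int configure_bonded_rss(uint16_t bonded_port_id)
{
        struct rte_eth_conf conf = {0};

        /* Slaves inherit the key, hash functions and RETA from here. */
        conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
        conf.rx_adv_conf.rss_conf.rss_key = bond_rss_key;
        conf.rx_adv_conf.rss_conf.rss_key_len = sizeof(bond_rss_key);
        conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
        return rte_eth_dev_configure(bonded_port_id, 4, 4, &conf);
}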
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6cf14c8..9a09748673b2 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
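
The translation above folds several RTE_ETH_TX_OFFLOAD_* bits into one internal TSO flag plus the checksum flags. A sketch of the offload set that triggers it (helper name and queue counts are placeholders):

#include <rte_ethdev.h>

static int enable_tso(uint16_t port_id)
{
        struct rte_eth_conf conf = {0};

        /* TCP TSO and the three tunnel TSO variants handled above. */
        conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_TCP_TSO |
                               RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
                               RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
                               RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}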
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767ac3dd6..e3b1bd8ad225 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -76,12 +76,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c60ba2..f63b8fabefd4 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -77,11 +77,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678916bb..9ff2d3dc114a 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -277,9 +277,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
/* Platform specific checks */
if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
plt_err("Outer IP and SCTP checksum unsupported");
return -EINVAL;
}
@@ -530,17 +530,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* TSO not supported for earlier chip revisions
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
- dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
/* 50G and 100G to be supported for board version C0
* and above of CN9K.
*/
if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
- dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+ dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
}
dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd61f79..08ee28658bce 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -76,12 +76,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
nix_eth_rx_burst_mseg[0][0][0][0][0][0];
if (dev->scalar_ena) {
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_burst);
}
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
}
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a14fd79..f35ae8e70438 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -76,11 +76,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
if (dev->scalar_ena) {
pick_tx_func(eth_dev, nix_eth_tx_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
} else {
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0e3652ed5109..f6b75645bb69 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
if (roc_nix_is_vf_or_sdp(&dev->nix) ||
dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
uint32_t speed_capa;
/* Auto negotiation disabled */
- speed_capa = ETH_LINK_SPEED_FIXED;
+ speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
- speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
}
return speed_capa;
@@ -54,8 +54,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
}
@@ -90,7 +90,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct rte_eth_fc_conf fc_conf = {0};
int rc;
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -98,10 +98,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
@@ -122,11 +122,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_ax() &&
dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
- (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+ (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_cfg.mode =
- (fc_cfg.mode == RTE_FC_FULL ||
- fc_cfg.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_cfg.mode == RTE_ETH_FC_FULL ||
+ fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -169,7 +169,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
* Maximum three segments can be supported with W8; choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -361,7 +361,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
rc = cnxk_nix_tsc_convert(dev);
if (rc) {
plt_err("Failed to calculate delta and freq mult");
@@ -434,24 +434,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
dev->ethdev_rss_hf = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -460,34 +460,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -513,7 +513,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
uint64_t rss_hf;
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
@@ -729,8 +729,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
/* Nothing much to do if offload is not enabled */
if (!(dev->tx_offloads &
- (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+ (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
return 0;
/* Setup LSO formats in AF. It's a no-op if other ethdev has
@@ -778,13 +778,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
@@ -814,7 +814,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
/* Prepare rx cfg */
rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
if (dev->rx_offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
}
@@ -1191,12 +1191,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled on PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
cnxk_eth_dev_ops.timesync_enable(eth_dev);
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
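For reference, the pattern in the hunk above — registering the dynamic mbuf
timestamp field only when the renamed Rx timestamp offload is requested —
reduces to a few lines. This is a minimal sketch, not the driver's exact
code; setup_rx_timestamp() is a hypothetical helper name:

    #include <rte_ethdev.h>
    #include <rte_mbuf_dyn.h>

    /* Register the dynamic timestamp field/flag only when the
     * RTE_ETH_RX_OFFLOAD_TIMESTAMP offload was requested. */
    static int
    setup_rx_timestamp(uint64_t rx_offloads, int *tstamp_offset,
                       uint64_t *tstamp_flag)
    {
        if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
            return 0; /* offload not requested, nothing to do */
        return rte_mbuf_dyn_rx_timestamp_register(tstamp_offset,
                                                  tstamp_flag);
    }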
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2528b3cdaa0c..53a657f8865d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -54,41 +54,44 @@
CNXK_NIX_TX_NB_SEG_MAX)
#define CNXK_NIX_RSS_L3_L4_SRC_DST \
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
#define CNXK_NIX_RSS_OFFLOAD \
- (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
- ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+ (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL | \
+ RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST | \
+ RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
#define CNXK_NIX_TX_OFFLOAD_CAPA \
- (DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define CNXK_NIX_RX_OFFLOAD_CAPA \
- (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ (RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE \
- (ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+ (RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
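As an aside, capability masks like CNXK_NIX_RX_OFFLOAD_CAPA above are
typically used to validate a requested configuration. A minimal sketch,
assuming a hypothetical check_rx_offloads() helper:

    #include <errno.h>
    #include <stdint.h>

    /* Reject any requested Rx offload bit that is not part of the
     * driver's advertised capability mask. */
    static int
    check_rx_offloads(uint64_t requested)
    {
        if (requested & ~CNXK_NIX_RX_OFFLOAD_CAPA)
            return -ENOTSUP; /* at least one unsupported flag set */
        return 0;
    }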
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb0954e..bf0c6d6b4ad8 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -49,11 +49,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
val = ROC_NIX_RSS_RETA_SZ_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
val = ROC_NIX_RSS_RETA_SZ_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
val = ROC_NIX_RSS_RETA_SZ_256;
else
val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..fa6b8aa4f0c5 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,25 +81,25 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
- {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
- {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
- {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_SECURITY, " Security,"},
- {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+ {RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+ {RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+ {RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
"Scalar, Rx Offloads:"
@@ -143,28 +143,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
- {DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
- {DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
- {DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
- {DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
- {DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
- {DEV_TX_OFFLOAD_SECURITY, " Security,"},
- {DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
- {DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
- {DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+ {RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+ {RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
};
static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
"Scalar, Tx Offloads:"
@@ -204,8 +204,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
enum rte_eth_fc_mode mode_map[] = {
- RTE_FC_NONE, RTE_FC_RX_PAUSE,
- RTE_FC_TX_PAUSE, RTE_FC_FULL
+ RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+ RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
};
struct roc_nix *nix = &dev->nix;
int mode;
@@ -265,10 +265,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -409,13 +409,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
plt_err("Scatter offload is not enabled for mtu");
goto exit;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
plt_err("Greater than maximum supported packet length");
goto exit;
@@ -443,9 +443,9 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
frame_size += RTE_ETHER_CRC_LEN;
if (frame_size > RTE_ETHER_MAX_LEN)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Update max_rx_pkt_len */
data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -816,7 +816,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_conf->rss_key)
roc_nix_rss_key_set(nix, rss_conf->rss_key);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
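For context, RTE_ETH_RSS_LEVEL() extracts the two-bit encapsulation level
carried in the upper bits of rss_hf, which is what the decrement above
relies on (level 0 means "PMD default"). A small sketch of that convention:

    uint64_t rss_hf = rss_conf->rss_hf;
    uint32_t level = RTE_ETH_RSS_LEVEL(rss_hf); /* 0 = PMD default */

    /* Fold level 1 (outermost) to 0 and level 2 (innermost) to 1. */
    if (level)
        level -= 1;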
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 3fdbdba49549..1cff8d56e65b 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
plt_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex"
: "half-duplex");
else
@@ -66,7 +66,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
eth_link.link_status = link->status;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
/* Print link info */
@@ -94,17 +94,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
return 0;
if (roc_nix_is_lbk(&dev->nix)) {
- link.link_status = ETH_LINK_UP;
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_autoneg = ETH_LINK_FIXED;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
} else {
rc = roc_nix_mac_link_info_get(&dev->nix, &info);
if (rc)
return rc;
link.link_status = info.status;
link.link_speed = info.speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (info.full_duplex)
link.link_duplex = info.full_duplex;
}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rc = roc_nix_ptp_rx_ena_dis(nix, true);
if (!rc) {
@@ -257,7 +257,7 @@ int
cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+ uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
struct roc_nix *nix = &dev->nix;
int rc = 0;
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index 32c1b5dee5fa..ecdfee7b11a6 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
plt_err("multi-queue mode is disabled");
return -ENOTSUP;
}
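The check above encodes a common precondition for flow RSS actions; a
minimal sketch, assuming eth_dev points at a configured port:

    /* Flow RSS actions are only meaningful when the port was
     * configured in RSS multi-queue Rx mode. */
    if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
        return -ENOTSUP;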
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..dee618a0db5f 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,32 +28,32 @@
#define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
#define CXGBE_DEFAULT_RSS_KEY_LEN 40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
/* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
/* Devargs filtermode and filtermask representation */
enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..4b5ab6f62971 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
}
new_link.link_status = cxgbe_force_linkup(adapter) ?
- ETH_LINK_UP : pi->link_cfg.link_ok;
+ RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -316,10 +316,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
/* set to jumbo mode if needed */
if (new_mtu > CXGBE_ETH_MAX_LEN)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
@@ -396,7 +396,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
goto out;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
else
eth_dev->data->scattered_rx = 0;
@@ -460,9 +460,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
CXGBE_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!(adapter->flags & FW_QUEUE_BOUND)) {
err = cxgbe_setup_sge_fwevtq(adapter);
@@ -685,10 +685,10 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/* Set to jumbo mode if necessary */
if (pkt_len > CXGBE_ETH_MAX_LEN)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
@@ -1079,13 +1079,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
rx_pause = 1;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1098,12 +1098,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
u8 tx_pause = 0, rx_pause = 0;
int ret;
- if (fc_conf->mode == RTE_FC_FULL) {
+ if (fc_conf->mode == RTE_ETH_FC_FULL) {
tx_pause = 1;
rx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
tx_pause = 1;
- } else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+ } else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
rx_pause = 1;
}
@@ -1199,9 +1199,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
}
if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1478,7 +1478,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_100G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
}
@@ -1487,7 +1487,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_50G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
}
@@ -1496,7 +1496,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
if (capa_arr) {
- capa_arr[num].speed = ETH_SPEED_NUM_25G;
+ capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS);
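The flow-control hunks above all implement the same symmetric mapping
between pause-frame directions and the renamed RTE_ETH_FC_* modes. A
minimal sketch, with pause_to_fc_mode() a hypothetical helper:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static enum rte_eth_fc_mode
    pause_to_fc_mode(bool rx_pause, bool tx_pause)
    {
        if (rx_pause && tx_pause)
            return RTE_ETH_FC_FULL;
        if (rx_pause)
            return RTE_ETH_FC_RX_PAUSE;
        if (tx_pause)
            return RTE_ETH_FC_TX_PAUSE;
        return RTE_ETH_FC_NONE;
    }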
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..54723edc2144 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1671,7 +1671,7 @@ int cxgbe_link_start(struct port_info *pi)
* that step explicitly.
*/
ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
- !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+ !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
true);
if (ret == 0) {
ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1695,7 +1695,7 @@ int cxgbe_link_start(struct port_info *pi)
}
if (ret == 0 && cxgbe_force_linkup(adapter))
- pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return ret;
}
@@ -1726,10 +1726,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_UDPEN;
@@ -1866,7 +1866,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
{
#define SET_SPEED(__speed_name) \
do { \
- *speed_caps |= ETH_LINK_ ## __speed_name; \
+ *speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
} while (0)
#define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1953,7 +1953,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
speed_caps);
if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
- *speed_caps |= ETH_LINK_SPEED_FIXED;
+ *speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
}
/**
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..eddb818c4861 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -366,7 +366,7 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843d2..c466256137a3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,30 +54,30 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
@@ -189,10 +189,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > DPAA_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
uint32_t max_len;
DPAA_PMD_DEBUG("enabling jumbo");
@@ -259,7 +259,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
}
- if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
@@ -304,43 +304,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
/* Configure link only if link is UP */
if (link->link_status) {
- if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
/* Start autoneg only if link is not in autoneg mode */
if (!link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- } else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
- switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M_HD:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ } else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+ switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M_HD:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_10M:
- speed = ETH_SPEED_NUM_10M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10M:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_100M_HD:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_HALF_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M_HD:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
- case ETH_LINK_SPEED_100M:
- speed = ETH_SPEED_NUM_100M;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_100M:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_1G:
- speed = ETH_SPEED_NUM_1G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_1G:
+ speed = RTE_ETH_SPEED_NUM_1G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_2_5G:
- speed = ETH_SPEED_NUM_2_5G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_2_5G:
+ speed = RTE_ETH_SPEED_NUM_2_5G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
- case ETH_LINK_SPEED_10G:
- speed = ETH_SPEED_NUM_10G;
- duplex = ETH_LINK_FULL_DUPLEX;
+ case RTE_ETH_LINK_SPEED_10G:
+ speed = RTE_ETH_SPEED_NUM_10G;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
- speed = ETH_SPEED_NUM_NONE;
- duplex = ETH_LINK_FULL_DUPLEX;
+ speed = RTE_ETH_SPEED_NUM_NONE;
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
}
/* Set link speed */
@@ -556,30 +556,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
if (fif->mac_type == fman_mac_1g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G;
} else if (fif->mac_type == fman_mac_2_5g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
- | ETH_LINK_SPEED_10M
- | ETH_LINK_SPEED_100M_HD
- | ETH_LINK_SPEED_100M
- | ETH_LINK_SPEED_1G
- | ETH_LINK_SPEED_2_5G
- | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G
+ | RTE_ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -612,13 +612,13 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
- {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+ {RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+ {RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
};
/* Update Rx offload info */
@@ -645,14 +645,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -686,7 +686,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
return ret;
- if (link->link_status == ETH_LINK_DOWN &&
+ if (link->link_status == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -697,15 +697,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
if (ioctl_version < 2) {
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (fif->mac_type == fman_mac_1g)
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
else if (fif->mac_type == fman_mac_2_5g)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else if (fif->mac_type == fman_mac_10g)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -981,7 +981,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
buffsz * DPAA_SGT_MAX_ENTRIES) {
DPAA_PMD_ERR("max RxPkt size %d too big to fit "
@@ -1303,7 +1303,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
return 0;
@@ -1319,7 +1319,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
return 0;
@@ -1349,10 +1349,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (fc_conf->mode == RTE_FC_NONE) {
+ if (fc_conf->mode == RTE_ETH_FC_NONE) {
return 0;
- } else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
- fc_conf->mode == RTE_FC_FULL) {
+ } else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+ fc_conf->mode == RTE_ETH_FC_FULL) {
fman_if_set_fc_threshold(dev->process_private,
fc_conf->high_water,
fc_conf->low_water,
@@ -1396,11 +1396,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
}
ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time =
fman_if_get_fc_quanta(dev->process_private);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -1663,10 +1663,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
fc_conf = dpaa_intf->fc_conf;
ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
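Note the fixed-speed handling above depends on RTE_ETH_LINK_SPEED_FIXED
being a single flag bit alongside the individual speed flags, so it must be
masked out before the switch (hence the '~' retained in the rename hunk). A
minimal sketch of that encoding:

    uint32_t speeds = eth_conf->link_speeds;

    if (speeds & RTE_ETH_LINK_SPEED_FIXED) {
        /* Drop the FIXED bit so only a speed flag remains. */
        switch (speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
        case RTE_ETH_LINK_SPEED_1G:
            /* fixed 1G, full duplex */
            break;
        default:
            /* unsupported fixed speed */
            break;
        }
    }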
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
#define DPAA_DEBUG_FQ_TX_ERROR 1
#define DPAA_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP)
#define DPAA_TX_CKSUM_OFFLOAD_MASK ( \
PKT_TX_IP_CKSUM | \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
if (req_dist_set % 2 != 0) {
dist_field = 1U << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_ETH;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
if (ipv4_configured)
break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV4;
break;
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (ipv6_configured)
break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_IPV6;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
if (tcp_configured)
break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_TCP;
break;
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (udp_configured)
break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
= HEADER_TYPE_UDP;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
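The switch above sits inside a loop that walks the RSS hash mask one bit at
a time; a minimal sketch of that idiom, assuming req_dist_set holds the
remaining rss_hf bits:

    uint64_t dist_field;
    int loop;

    for (loop = 0; req_dist_set != 0; loop++, req_dist_set >>= 1) {
        if (req_dist_set & 1) {
            dist_field = 1ULL << loop;
            /* dispatch on dist_field, e.g. RTE_ETH_RSS_IPV4 */
        }
    }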
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f12e..7c92b2a42e3f 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -216,7 +216,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
if (req_dist_set % 2 != 0) {
dist_field = 1ULL << loop;
switch (dist_field) {
- case ETH_RSS_L2_PAYLOAD:
+ case RTE_ETH_RSS_L2_PAYLOAD:
if (l2_configured)
break;
@@ -233,7 +233,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_MPLS:
+ case RTE_ETH_RSS_MPLS:
if (mpls_configured)
break;
@@ -270,13 +270,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_IPV4:
- case ETH_RSS_FRAG_IPV4:
- case ETH_RSS_NONFRAG_IPV4_OTHER:
- case ETH_RSS_IPV6:
- case ETH_RSS_FRAG_IPV6:
- case ETH_RSS_NONFRAG_IPV6_OTHER:
- case ETH_RSS_IPV6_EX:
+ case RTE_ETH_RSS_IPV4:
+ case RTE_ETH_RSS_FRAG_IPV4:
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_IPV6:
+ case RTE_ETH_RSS_FRAG_IPV6:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+ case RTE_ETH_RSS_IPV6_EX:
if (l3_configured)
break;
@@ -314,12 +314,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_TCP:
- case ETH_RSS_NONFRAG_IPV6_TCP:
- case ETH_RSS_NONFRAG_IPV4_UDP:
- case ETH_RSS_NONFRAG_IPV6_UDP:
- case ETH_RSS_IPV6_TCP_EX:
- case ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
if (l4_configured)
break;
@@ -346,8 +346,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
i++;
break;
- case ETH_RSS_NONFRAG_IPV4_SCTP:
- case ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
if (sctp_configured)
break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e22..23bb985b95e9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,34 +38,34 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
static uint64_t dev_rx_offloads_nodis =
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MT_LOCKFREE |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Tx offloads which cannot be disabled */
static uint64_t dev_tx_offloads_nodis =
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* enable timestamp in mbuf */
bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -143,7 +143,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
PMD_INIT_FUNC_TRACE();
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN Filter not available */
if (!priv->max_vlan_filters) {
DPAA2_PMD_INFO("VLAN filter not available");
@@ -151,7 +151,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
priv->token, true);
else
@@ -252,13 +252,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_rx_offloads_nodis;
dev_info->tx_offload_capa = dev_tx_offloads_sup |
dev_tx_offloads_nodis;
- dev_info->speed_capa = ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -271,10 +271,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
if (dpaa2_svr_family == SVR_LX2160A) {
- dev_info->speed_capa |= ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
return 0;
@@ -292,16 +292,16 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
- {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
- {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
- {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
- {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
- {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
- {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+ {RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+ {RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+ {RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+ {RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+ {RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+ {RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+ {RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
};
/* Update Rx offload info */
@@ -328,15 +328,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} tx_offload_map[] = {
- {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
- {DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
- {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
- {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
- {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
- {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
- {DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
- {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
- {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+ {RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+ {RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+ {RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+ {RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+ {RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+ {RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+ {RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+ {RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
};
/* Update Tx offload info */
@@ -559,7 +559,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
priv->token, eth_conf->rxmode.max_rx_pkt_len
@@ -578,7 +578,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
ret = dpaa2_setup_flow_dist(dev,
eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -592,12 +592,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rx_l3_csum_offload = true;
- if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
- (rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+ (rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
rx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -615,7 +615,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
#if !defined(RTE_LIBRTE_IEEE1588)
- if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
#endif
{
ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -628,12 +628,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dpaa2_enable_ts[dev->data->port_id] = true;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
- if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+ if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
tx_l4_csum_offload = true;
ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -665,8 +665,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
dpaa2_tm_init(dev);
@@ -1477,10 +1477,10 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > DPAA2_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -1881,7 +1881,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
return -1;
}
- if (state.up == ETH_LINK_DOWN &&
+ if (state.up == RTE_ETH_LINK_DOWN &&
wait_to_complete)
rte_delay_ms(CHECK_INTERVAL);
else
@@ -1893,9 +1893,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
link.link_speed = state.rate;
if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = rte_eth_linkstatus_set(dev, &link);
if (ret == -1)
@@ -2056,9 +2056,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* No TX side flow control (send Pause frame disabled)
*/
if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
} else {
/* DPNI_LINK_OPT_PAUSE not set
* if ASYM_PAUSE set,
@@ -2068,9 +2068,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* Flow control disabled
*/
if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return ret;
@@ -2114,14 +2114,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* update cfg with fc_conf */
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
/* Full flow control;
* OPT_PAUSE set, ASYM_PAUSE not set
*/
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
/* Enable RX flow control
* OPT_PAUSE not set;
* ASYM_PAUSE set;
@@ -2129,7 +2129,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
cfg.options &= ~DPNI_LINK_OPT_PAUSE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
/* Enable TX Flow control
* OPT_PAUSE set
* ASYM_PAUSE set
@@ -2137,7 +2137,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
cfg.options |= DPNI_LINK_OPT_PAUSE;
cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
/* Disable Flow control
* OPT_PAUSE not set
* ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cdc0..ca75a2175524 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,12 +65,12 @@
#define DPAA2_TX_CONF_ENABLE 0x08
#define DPAA2_RSS_OFFLOAD_ALL ( \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IP | \
- ETH_RSS_UDP | \
- ETH_RSS_TCP | \
- ETH_RSS_SCTP | \
- ETH_RSS_MPLS)
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_MPLS)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
rte_vlan_strip(bufs[num_rx]);
dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
eth_data->port_id);
if (eth_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP) {
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
rte_vlan_strip(bufs[num_rx]);
}
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags
& PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
(eth_data->dev_conf.txmode.offloads
- & DEV_TX_OFFLOAD_VLAN_INSERT))) {
+ & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
int ret = rte_vlan_insert(bufs);
if (ret)
goto send_n_return;
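The Tx hunks above fall back to software VLAN insertion when the renamed
RTE_ETH_TX_OFFLOAD_VLAN_INSERT offload is emulated; a minimal sketch of
that fallback, where m is an mbuf about to be transmitted and
txmode_offloads is an assumed local copy of the configured Tx offloads:

    #include <rte_ether.h>

    if ((m->ol_flags & PKT_TX_VLAN_PKT) ||
        (txmode_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)) {
        /* Prepend a VLAN header built from m->vlan_tci in software. */
        if (rte_vlan_insert(&m) != 0)
            goto drop; /* headroom exhausted or shared mbuf */
    }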
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..ca488fea966f 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
#define E1000_FTQF_QUEUE_ENABLE 0x00000100
#define IGB_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
/*
* The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..81f8bc3cd746 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -599,8 +599,8 @@ eth_em_start(struct rte_eth_dev *dev)
e1000_clear_hw_cntrs_base_generic(hw);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | \
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_em_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -613,39 +613,39 @@ eth_em_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -1104,9 +1104,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
/* Preferred queue parameters */
dev_info->default_rxportconf.nb_queues = 1;
@@ -1164,17 +1164,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -1426,15 +1426,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if(mask & RTE_ETH_VLAN_STRIP_MASK){
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
em_vlan_hw_strip_enable(dev);
else
em_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if(mask & RTE_ETH_VLAN_FILTER_MASK){
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
em_vlan_hw_filter_enable(dev);
else
em_vlan_hw_filter_disable(dev);
@@ -1603,7 +1603,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
if (link.link_status) {
PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1685,13 +1685,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1820,11 +1820,11 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (frame_size > E1000_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~E1000_RCTL_LPE;
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
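
For context, the masks renamed in eth_em_vlan_offload_set() above are the same values an application passes to rte_eth_dev_set_vlan_offload(). A minimal usage sketch with the new names (illustrative only; the port id and helper name are assumptions of this sketch, not part of the patch):

#include <rte_ethdev.h>

/* Illustrative sketch: enable VLAN strip and filter on a port using
 * the renamed RTE_ETH_VLAN_*_MASK values decoded by the PMD above. */
static int
enable_vlan_offloads(uint16_t port_id)
{
	int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;

	/* The PMD consults dev_conf.rxmode.offloads, so the matching
	 * RTE_ETH_RX_OFFLOAD_VLAN_* bits must already be requested at
	 * rte_eth_dev_configure() time for the masks to take effect. */
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}
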
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..cf672c32277b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
struct em_rx_entry *sw_ring; /**< address of RX software ring. */
struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
- uint64_t offloads; /**< Offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
uint16_t nb_rx_desc; /**< number of RX descriptors. */
uint16_t rx_tail; /**< current value of RDT register. */
uint16_t nb_rx_hold; /**< number of held free RX desc. */
@@ -172,7 +172,7 @@ struct em_tx_queue {
uint8_t wthresh; /**< Write-back threshold register. */
struct em_ctx_info ctx_cache;
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
#if 1
@@ -1168,11 +1168,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
RTE_SET_USED(dev);
tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
return tx_offload_capa;
}
@@ -1367,15 +1367,15 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
@@ -1468,7 +1468,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1806,7 +1806,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1839,7 +1839,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1849,7 +1849,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1862,7 +1862,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1874,21 +1874,21 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
}
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1908,7 +1908,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
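
The capability getters above feed rte_eth_dev_info; on the application side the renamed offload bits are tested the same way as before. A short sketch under the renamed API (the helper name is this sketch's own):

#include <rte_ethdev.h>

/* Illustrative sketch: check which of the renamed Rx checksum offloads
 * a port reports before requesting them in rte_eth_dev_configure(). */
static uint64_t
supported_rx_csum(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	return dev_info.rx_offload_capa &
		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
}
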
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f33415a..7a35d7d89eb1 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1082,21 +1082,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
- if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
- tx_mq_mode == ETH_MQ_TX_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+ tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* Check multi-queue mode.
- * To no break software we accept ETH_MQ_RX_NONE as this might
+ * To not break software we accept RTE_ETH_MQ_RX_NONE as this might
* be used to turn off VLAN filter.
*/
- if (rx_mq_mode == ETH_MQ_RX_NONE ||
- rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
} else {
/* Only support one queue on VFs.
@@ -1108,12 +1108,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
/* TX mode is not used here, so mode might be ignored.*/
- if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(WARNING, "SRIOV is active,"
" TX mode %d is not supported. "
" Driver will behave as %d mode.",
- tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+ tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
}
/* check valid queue number */
@@ -1126,17 +1126,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
/* To no break software that set invalid mode, only display
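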
* warning if invalid mode is used.
*/
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq not supported*/
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
return -EINVAL;
}
- if (tx_mq_mode != ETH_MQ_TX_NONE &&
- tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+ tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
" Due to txmode is meaningless in this"
" driver, just ignore.",
@@ -1155,8 +1155,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multipe queue mode checking */
ret = igb_check_mq_mode(dev);
@@ -1296,8 +1296,8 @@ eth_igb_start(struct rte_eth_dev *dev)
/*
* VLAN Offload Settings
*/
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | \
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = eth_igb_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1305,7 +1305,7 @@ eth_igb_start(struct rte_eth_dev *dev)
return ret;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable VLAN filter since VMDq always use VLAN filter */
igb_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1319,39 +1319,39 @@ eth_igb_start(struct rte_eth_dev *dev)
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
hw->mac.autoneg = 1;
} else {
num_speeds = 0;
- autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+ autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
/* Reset */
hw->phy.autoneg_advertised = 0;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
@@ -2194,21 +2194,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
case e1000_82576:
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
- dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
dev_info->vmdq_queue_num = 8;
break;
@@ -2234,7 +2234,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return -EINVAL;
}
dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2260,9 +2260,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G;
dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2305,12 +2305,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
dev_info->max_rx_pktlen = 0x3FFF; /* See RLPML register. */
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
switch (hw->mac.type) {
case e1000_vfadapt:
dev_info->max_rx_queues = 2;
@@ -2411,17 +2411,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
} else if (!link_check) {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2597,7 +2597,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= E1000_CTRL_EXT_EXT_VLAN;
/* only outer TPID of double VLAN can be configured*/
- if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg = E1000_READ_REG(hw, E1000_VET);
reg = (reg & (~E1000_VET_VET_EXT)) |
((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2686,7 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
E1000_WRITE_REG(hw, E1000_RLPML,
dev->data->dev_conf.rxmode.max_rx_pkt_len);
}
@@ -2704,7 +2704,7 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
E1000_WRITE_REG(hw, E1000_RLPML,
dev->data->dev_conf.rxmode.max_rx_pkt_len +
VLAN_TAG_SIZE);
@@ -2716,22 +2716,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if(mask & ETH_VLAN_STRIP_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if(mask & RTE_ETH_VLAN_STRIP_MASK){
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igb_vlan_hw_strip_enable(dev);
else
igb_vlan_hw_strip_disable(dev);
}
- if(mask & ETH_VLAN_FILTER_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if(mask & RTE_ETH_VLAN_FILTER_MASK){
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igb_vlan_hw_filter_enable(dev);
else
igb_vlan_hw_filter_disable(dev);
}
- if(mask & ETH_VLAN_EXTEND_MASK){
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if(mask & RTE_ETH_VLAN_EXTEND_MASK){
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
igb_vlan_hw_extend_enable(dev);
else
igb_vlan_hw_extend_disable(dev);
@@ -2883,7 +2883,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3037,13 +3037,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3112,18 +3112,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
* on configuration
*/
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
ctrl |= E1000_CTRL_RFCE;
ctrl &= ~E1000_CTRL_TFCE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
ctrl |= E1000_CTRL_TFCE;
ctrl &= ~E1000_CTRL_RFCE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
break;
default:
@@ -3271,22 +3271,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -3584,10 +3584,10 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
@@ -3625,10 +3625,10 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
uint16_t idx, shift;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
@@ -4407,11 +4407,11 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (frame_size > E1000_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~E1000_RCTL_LPE;
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
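
The renamed RTE_ETH_FC_* modes that eth_igb_flow_ctrl_get()/set() translate above map one-to-one onto the old RTE_FC_* names. A minimal sketch of requesting full pause from an application (helper name and read-modify-write flow are assumptions of this sketch):

#include <rte_ethdev.h>

/* Illustrative sketch: request full Rx/Tx pause using the renamed
 * RTE_ETH_FC_* modes decoded by eth_igb_flow_ctrl_set() above. */
static int
set_full_flow_ctrl(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	/* Read back the current settings first so the untouched fields
	 * (high/low water marks, pause_time, ...) keep sane values. */
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}
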
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
if (*vfinfo == NULL)
rte_panic("Cannot allocate memory for private VF data\n");
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..78c85fdbb51c 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/**
@@ -185,7 +185,7 @@ struct igb_tx_queue {
/**< Start context position for transmit queue. */
struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
#if 1
@@ -1456,13 +1456,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
RTE_SET_USED(dev);
- tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
return tx_offload_capa;
}
@@ -1635,20 +1635,20 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == e1000_i350 ||
hw->mac.type == e1000_i210 ||
hw->mac.type == e1000_i211)
- rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
return rx_offload_capa;
}
@@ -1729,7 +1729,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1963,23 +1963,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
}
@@ -2045,23 +2045,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -2183,15 +2183,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
E1000_VMOLR_MPME);
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
vmolr |= E1000_VMOLR_AUPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
vmolr |= E1000_VMOLR_ROMPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
vmolr |= E1000_VMOLR_ROPE;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
vmolr |= E1000_VMOLR_BAM;
- if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
vmolr |= E1000_VMOLR_MPME;
E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2228,7 +2228,7 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
for (i = 0; i < cfg->nb_pool_maps; i++) {
/* set vlan id in VF register and set the valid bit */
E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
- (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
+ (cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) | \
((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
E1000_VLVF_POOLSEL_MASK)));
}
@@ -2281,7 +2281,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t mrqc;
- if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+ if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
/*
* SRIOV active scheme
* FIXME if support RSS together with VMDq & SRIOV
@@ -2295,14 +2295,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igb_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
/*Configure general VMDQ only RX parameters*/
igb_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode.*/
default:
igb_rss_disable(dev);
@@ -2342,7 +2342,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
rctl |= E1000_RCTL_LPE;
@@ -2351,7 +2351,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Set maximum packet length by default, and might be updated
* together with enabling/disabling dual VLAN.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
max_len += VLAN_TAG_SIZE;
E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2387,7 +2387,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2458,7 +2458,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2502,16 +2502,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
rxcsum |= E1000_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= E1000_RXCSUM_IPOFL;
else
rxcsum &= ~E1000_RXCSUM_IPOFL;
if (rxmode->offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
rxcsum |= E1000_RXCSUM_TUOFL;
else
rxcsum &= ~E1000_RXCSUM_TUOFL;
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= E1000_RXCSUM_CRCOFL;
else
rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2519,7 +2519,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
/* clear STRCRC bit in all queues */
@@ -2559,7 +2559,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Make sure VLAN Filters are off. */
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
rctl &= ~E1000_RCTL_VFE;
/* Don't store bad packets. */
rctl &= ~E1000_RCTL_SBP;
@@ -2758,7 +2758,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
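
On the application side, the renamed RTE_ETH_RSS_* bits that igb_hw_rss_hash_set() maps to MRQC above are selected through the standard hash-update call. A minimal sketch (illustrative only; keeping the current key is a choice of this sketch):

#include <rte_ethdev.h>

/* Illustrative sketch: restrict the RSS hash to IPv4/IPv6 TCP using the
 * renamed RTE_ETH_RSS_* flags handled by the driver above. */
static int
set_tcp_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* NULL keeps the current hash key */
		.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV6_TCP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}
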
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..4e3ee72608f4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -116,10 +116,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
#define ENA_STATS_ARRAY_TX ARRAY_SIZE(ena_stats_tx_strings)
#define ENA_STATS_ARRAY_RX ARRAY_SIZE(ena_stats_rx_strings)
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
- DEV_TX_OFFLOAD_UDP_CKSUM |\
- DEV_TX_OFFLOAD_IPV4_CKSUM |\
- DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
PKT_TX_IP_CKSUM |\
PKT_TX_TCP_SEG)
@@ -310,7 +310,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
(queue_offloads & QUEUE_OFFLOADS)) {
/* check if TSO is required */
if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
ena_tx_ctx->tso_enable = true;
ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -318,7 +318,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L3 checksum is needed */
if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
ena_tx_ctx->l3_csum_enable = true;
if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -335,12 +335,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
/* check if L4 checksum is needed */
if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
ena_tx_ctx->l4_csum_enable = true;
} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
PKT_TX_UDP_CKSUM) &&
- (queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ (queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
ena_tx_ctx->l4_csum_enable = true;
} else {
@@ -623,9 +623,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
struct rte_eth_link *link = &dev->data->dev_link;
struct ena_adapter *adapter = dev->data->dev_private;
- link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -684,7 +684,7 @@ static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
uint32_t max_frame_len = adapter->max_mtu;
if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
max_frame_len =
adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
@@ -915,7 +915,7 @@ static int ena_start(struct rte_eth_dev *dev)
if (rc)
goto err_start_tx;
- if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+ if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
rc = ena_rss_configure(adapter);
if (rc)
goto err_rss_init;
@@ -1854,9 +1854,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
adapter->state = ENA_ADAPTER_STATE_CONFIG;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
- dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
adapter->tx_selected_offloads = dev->data->dev_conf.txmode.offloads;
adapter->rx_selected_offloads = dev->data->dev_conf.rxmode.offloads;
@@ -1907,36 +1907,36 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
dev_info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/* Set Tx & Rx features available for device */
if (adapter->offloads.tso4_supported)
- tx_feat |= DEV_TX_OFFLOAD_TCP_TSO;
+ tx_feat |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (adapter->offloads.tx_csum_supported)
- tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ tx_feat |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (adapter->offloads.rx_csum_supported)
- rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ rx_feat |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ rx_feat |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ tx_feat |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
dev_info->rx_offload_capa = rx_feat;
if (adapter->offloads.rss_hash_supported)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->rx_queue_offload_capa = rx_feat;
dev_info->tx_offload_capa = tx_feat;
dev_info->tx_queue_offload_capa = tx_feat;
@@ -2100,7 +2100,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
#endif
- fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+ fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
descs_in_use = rx_ring->ring_size -
ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
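
The link fields filled by ena_link_update() above reach the application unchanged apart from the rename. A short sketch of reading them back (the print helper is this sketch's own):

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative sketch: read link state with the renamed RTE_ETH_LINK_*
 * values that ena_link_update() fills in above. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %u Mbps, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
	       "full-duplex" : "half-duplex");
}
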
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5cb..3b1844e50982 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -54,8 +54,8 @@
#define ENA_HASH_KEY_SIZE 40
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 88afe13da04d..3193faf1fa8c 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
if (reta_size == 0 || reta_conf == NULL)
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -140,7 +140,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
(reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
return -EINVAL;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR,
"RSS was not configured for the PMD\n");
return -ENOTSUP;
@@ -200,34 +200,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Convert proto to ETH flag */
switch (proto) {
case ENA_ADMIN_RSS_TCP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
break;
case ENA_ADMIN_RSS_UDP4:
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
break;
case ENA_ADMIN_RSS_TCP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
break;
case ENA_ADMIN_RSS_UDP6:
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
break;
case ENA_ADMIN_RSS_IP4:
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
break;
case ENA_ADMIN_RSS_IP6:
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
break;
case ENA_ADMIN_RSS_IP4_FRAG:
- rss_hf |= ETH_RSS_FRAG_IPV4;
+ rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
break;
case ENA_ADMIN_RSS_NOT_IP:
- rss_hf |= ETH_RSS_L2_PAYLOAD;
+ rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
break;
case ENA_ADMIN_RSS_TCP6_EX:
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
break;
case ENA_ADMIN_RSS_IP6_EX:
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
break;
default:
break;
@@ -236,10 +236,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L3. */
switch (fields & ENA_HF_RSS_ALL_L3) {
case ENA_ADMIN_RSS_L3_SA:
- rss_hf |= ETH_RSS_L3_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L3_DA:
- rss_hf |= ETH_RSS_L3_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
break;
default:
break;
@@ -248,10 +248,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
/* Check if only DA or SA is being used for L4. */
switch (fields & ENA_HF_RSS_ALL_L4) {
case ENA_ADMIN_RSS_L4_SP:
- rss_hf |= ETH_RSS_L4_SRC_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
break;
case ENA_ADMIN_RSS_L4_DP:
- rss_hf |= ETH_RSS_L4_DST_ONLY;
+ rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
break;
default:
break;
@@ -269,11 +269,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
/* Determine which fields of L3 should be used. */
- switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
- case ETH_RSS_L3_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+ case RTE_ETH_RSS_L3_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_DA;
break;
- case ETH_RSS_L3_SRC_ONLY:
+ case RTE_ETH_RSS_L3_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L3_SA;
break;
default:
@@ -285,11 +285,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
}
/* Determine which fields of L4 should be used. */
- switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
- case ETH_RSS_L4_DST_ONLY:
+ switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ case RTE_ETH_RSS_L4_DST_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_DP;
break;
- case ETH_RSS_L4_SRC_ONLY:
+ case RTE_ETH_RSS_L4_SRC_ONLY:
fields_mask |= ENA_ADMIN_RSS_L4_SP;
break;
default:
@@ -335,43 +335,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
int rc, i;
/* Turn on appropriate fields for each requested packet type */
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
- if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+ if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
selected_fields[ENA_ADMIN_RSS_UDP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
- if ((rss_hf & ETH_RSS_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
selected_fields[ENA_ADMIN_RSS_IP6].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
- if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+ if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
- if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+ if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
- if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+ if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
@@ -542,7 +542,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
uint16_t admin_hf;
static bool warn_once;
- if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
return -ENOTSUP;
}
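
For completeness, the RETA sizes the drivers validate above are consumed by the generic update call. A sketch of spreading a 128-entry table round-robin over the active queues (assuming RTE_ETH_RETA_GROUP_SIZE, the renamed RTE_RETA_GROUP_SIZE, and a helper of this sketch's own naming):

#include <rte_ethdev.h>

/* Illustrative sketch: distribute a 128-entry RETA round-robin across
 * nb_queues, matching the RTE_ETH_RSS_RETA_SIZE_128 check above. */
static int
spread_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_128 /
					     RTE_ETH_RETA_GROUP_SIZE] = {0};
	uint16_t i;

	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i %
			RTE_ETH_RETA_GROUP_SIZE] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta,
					   RTE_ETH_RSS_RETA_SIZE_128);
}
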
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..e0fb44edeb41 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
if (status & ENETC_LINK_MODE)
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
else
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
if (status & ENETC_LINK_STATUS)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
switch (status & ENETC_LINK_SPEED_MASK) {
case ENETC_LINK_SPEED_1G:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ENETC_LINK_SPEED_100M:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
default:
case ENETC_LINK_SPEED_10M:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -207,11 +207,11 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa =
- (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
return 0;
}
@@ -462,7 +462,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
RTE_ETH_QUEUE_STATE_STOPPED;
}
- rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0);
return 0;
@@ -679,10 +679,10 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > ENETC_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
@@ -708,7 +708,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
uint32_t max_len;
max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -723,7 +723,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
RTE_ETHER_CRC_LEN;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
int config;
config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -731,10 +731,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
checksum &= ~L3_CKSUM;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
checksum &= ~L4_CKSUM;
enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
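
The configure paths changed above OR in RTE_ETH_RX_OFFLOAD_RSS_HASH whenever RSS is requested; the request itself comes from the application's port configuration. A minimal sketch with the renamed names (error handling trimmed; the hash-function choice is this sketch's assumption):

#include <rte_ethdev.h>

/* Illustrative sketch: request RSS at configure time using the renamed
 * RTE_ETH_MQ_RX_RSS mode and RTE_ETH_RSS_* hash bits. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			.mq_mode = RTE_ETH_MQ_RX_RSS,
		},
		.rx_adv_conf.rss_conf = {
			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}
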
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..d4858326ed7a 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
uint16_t sub_devid;
uint32_t capa;
} vic_speed_capa_map[] = {
- { 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
- { 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
- { 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
- { 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
- { 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
- { 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
- { 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
- { 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
- { 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
- { 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
- { 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
- { 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
- { 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
- { 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
- { 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1440 Mezz */
- { 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G }, /* 1480 MLOM */
- { 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
- { 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
- { 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
- { 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
- { 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
- { 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+ { 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+ { 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+ { 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+ { 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+ { 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+ { 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+ { 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+ { 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+ { 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+ { 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+ { 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+ { 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+ { 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+ { 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+ { 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+ { 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+ { 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+ { 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+ { 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+ { 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+ { 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+ { 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
{ 0, 0 }, /* End marker */
};
@@ -293,8 +293,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
ENICPMD_FUNC_TRACE();
offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
enic->ig_vlan_strip_en = 1;
else
enic->ig_vlan_strip_en = 0;
@@ -319,17 +319,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
return ret;
}
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->mc_count = 0;
enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_CHECKSUM);
+ RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* All vlan offload masks to apply the current settings */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = enicpmd_vlan_offload_set(eth_dev, mask);
if (ret) {
dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -431,14 +431,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
}
/* 1300 and later models are at least 40G */
if (id >= 0x0100)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
/* VFs have subsystem id 0, check device id */
if (id == 0) {
/* Newer VF implies at least 40G model */
if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
- return ETH_LINK_SPEED_40G;
+ return RTE_ETH_LINK_SPEED_40G;
}
- return ETH_LINK_SPEED_10G;
+ return RTE_ETH_LINK_SPEED_10G;
}
static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -879,7 +879,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
*/
conf->offloads = enic->rx_offload_capa;
if (!enic->ig_vlan_strip_en)
- conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* rx_thresh and other fields are not applicable for enic */
}
@@ -965,8 +965,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
static int udp_tunnel_common_check(struct enic *enic,
struct rte_eth_udp_tunnel *tnl)
{
- if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
- tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+ if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+ tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1006,7 +1006,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
@@ -1035,7 +1035,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
ret = udp_tunnel_common_check(enic, tnl);
if (ret)
return ret;
- vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+ vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
if (vxlan)
port = enic->vxlan_port;
else
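
The renamed tunnel types accepted by udp_tunnel_common_check() above arrive through the standard UDP tunnel port API. A minimal sketch of registering the IANA-assigned VXLAN port (4789; helper name is this sketch's own):

#include <rte_ethdev.h>

/* Illustrative sketch: register the VXLAN UDP port using the renamed
 * RTE_ETH_TUNNEL_TYPE_VXLAN accepted by the enic checks above. */
static int
add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789,
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}
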
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..754cf362c6d8 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
memset(&link, 0, sizeof(link));
link.link_status = enic_get_link_status(enic);
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_speed = vnic_dev_port_speed(enic->vdev);
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
}
eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
- eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* vnic notification of link status has already been turned on in
* enic_dev_init() which is called during probe time. Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
* and vlan insertion are supported.
*/
simple_tx_offloads = enic->tx_offload_capa &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if ((eth_dev->data->dev_conf.txmode.offloads &
~simple_tx_offloads) == 0) {
ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SCATTER) {
+ RTE_ETH_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
@@ -1386,15 +1386,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type = 0;
rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
if (enic->rq_count > 1 &&
- (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+ (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
rss_hf != 0) {
rss_enable = 1;
- if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
if (enic->udp_rss_weak) {
/*
@@ -1405,12 +1405,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
}
}
- if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+ if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
- if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+ if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
if (enic->udp_rss_weak)
rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1751,9 +1751,9 @@ enic_enable_overlay_offload(struct enic *enic)
return -EINVAL;
}
enic->tx_offload_capa |=
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- (enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
- (enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ (enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+ (enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
enic->tx_offload_mask |=
PKT_TX_OUTER_IPV6 |
PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index a8f5332a407f..12f734260ca5 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
* IPV4 hash type handles both non-frag and frag packet types.
* TCP/UDP is controlled via a separate flag below.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (ENIC_SETTING(enic, RSSHASH_IPV6))
/*
* The VIC adapter can perform RSS on IPv6 packets with and
* without extension headers. An IPv6 "fragment" is an IPv6
* packet with the fragment extension header.
*/
- enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (enic->udp_rss_weak)
enic->flow_type_rss_offloads |=
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
- enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
/* Zero offloads if RSS is not enabled */
if (!ENIC_SETTING(enic, RSS))
@@ -201,20 +201,20 @@ int enic_get_vnic_config(struct enic *enic)
enic->tx_queue_offload_capa = 0;
enic->tx_offload_capa =
enic->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
enic->tx_offload_mask =
PKT_TX_IPV6 |
PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 8216063a3d8b..9b22a6ce8941 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
static const struct rte_eth_link eth_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_UP,
- .link_autoneg = ETH_LINK_AUTONEG,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_UP,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG,
};
static int
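An application reads back exactly the fields this static rte_eth_link initializes; a brief sketch with the renamed constants:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print link state; rte_eth_link_get_nowait() returns 0 on success. */
static void print_link(uint16_t port_id)
{
    struct rte_eth_link link;

    if (rte_eth_link_get_nowait(port_id, &link) != 0)
        return;
    if (link.link_status == RTE_ETH_LINK_UP)
        printf("port %u: %u Mbps, %s duplex\n", port_id,
               link.link_speed,
               link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
               "full" : "half");
}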
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
int qid;
struct rte_eth_dev *fsdev;
struct rxq **rxq;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH(sdev)->data->dev_conf.intr_conf;
fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
failsafe_rx_intr_install(struct rte_eth_dev *dev)
{
struct fs_priv *priv = PRIV(dev);
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&priv->data->dev_conf.intr_conf;
if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..8cb215651df8 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1182,53 +1182,53 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
* configuring a sub-device.
*/
infos->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->rx_queue_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_MACSEC_STRIP |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_SECURITY |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SECURITY |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
infos->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO;
infos->flow_type_rss_offloads =
- ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP;
+ RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP;
infos->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
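Applications are expected to intersect their requested offloads with these advertised masks before configuring the port; a sketch (the chosen offload set is illustrative):

#include <errno.h>
#include <rte_ethdev.h>

/* Fail early if the port cannot satisfy the requested Rx offloads. */
static int check_rx_offloads(uint16_t port_id, uint64_t wanted)
{
    struct rte_eth_dev_info dev_info;
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;
    return (dev_info.rx_offload_capa & wanted) == wanted ? 0 : -ENOTSUP;
}

For example, check_rx_offloads(port_id, RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_LRO) rejects a port that cannot do both Rx checksum and LRO.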
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc4b..7af115399e0f 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
uint8_t drop_en;
uint8_t rx_deferred_start; /* don't start this queue in dev start. */
uint16_t rx_ftag_en; /* indicates FTAG RX supported */
- uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
};
/*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
uint16_t next_rs; /* Next pos to set RS flag */
uint16_t next_dd; /* Next pos to check DD flag */
volatile uint32_t *tail_ptr;
- uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
uint16_t nb_desc;
uint16_t port_id;
uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..b5935d714a37 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
PMD_INIT_LOG(ERR, "DCB mode is not supported.");
return -EINVAL;
}
- if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
return 0;
if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = fm10k_check_mq_mode(dev);
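The mq_mode/offload interplay checked here originates in the application's port configuration; a sketch of the caller side with the renamed macros (queue counts and the rss_hf selection are illustrative):

#include <rte_ethdev.h>

/* Configure a port for RSS; the PMD then forces the RSS_HASH offload
 * on, as in fm10k_dev_configure() above.
 */
static int configure_rss(uint16_t port_id)
{
    struct rte_eth_conf conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,
            .offloads = RTE_ETH_RX_OFFLOAD_RSS_HASH,
        },
        .rx_adv_conf.rss_conf = {
            .rss_key = NULL, /* let the PMD pick its default key */
            .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP,
        },
    };

    return rte_eth_dev_configure(port_id, 4, 4, &conf);
}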
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
};
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
*/
hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
if (mrqc == 0) {
PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64" is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
if (hw->mac.type != fm10k_mac_pf)
return;
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
nb_queue_pools = vmdq_conf->nb_queue_pools;
/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
/* It adds dual VLAN length for supporting dual VLAN */
if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
- rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t reg;
dev->data->scattered_rx = 1;
reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
}
/* Update default vlan when not in VMDQ mode */
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_50G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
dev->data->dev_link.link_status =
- dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+ dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
return 0;
}
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->max_vfs = pdev->max_vfs;
dev_info->vmdq_pool_base = 0;
dev_info->vmdq_queue_base = 0;
- dev_info->max_vmdq_pools = ETH_32_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_32_POOLS;
dev_info->vmdq_queue_num = FM10K_MAX_QUEUES_PF;
dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
dev_info->reta_size = FM10K_MAX_RSS_INDICES;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
};
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
return -EINVAL;
}
- if (vlan_id > ETH_VLAN_ID_MAX) {
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
return -EINVAL;
}
@@ -1767,21 +1767,21 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
}
static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_HEADER_SPLIT |
- DEV_RX_OFFLOAD_RSS_HASH);
+ return (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
}
static int
@@ -1966,12 +1966,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
{
RTE_SET_USED(dev);
- return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO);
+ return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO);
}
static int
@@ -2199,15 +2199,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
return -EINVAL;
mrqc = 0;
- mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
- mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
- mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0;
+ mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0;
/* If the mapping doesn't fit any supported, return */
if (mrqc == 0)
@@ -2244,15 +2244,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
hf = 0;
- hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0;
- hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
- hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV4) ? RTE_ETH_RSS_IPV4 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6 : 0;
+ hf |= (mrqc & FM10K_MRQC_IPV6) ? RTE_ETH_RSS_IPV6_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX : 0;
rss_conf->rss_hf = hf;
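The forward and reverse ?: ladders above lend themselves to a shared lookup table; a sketch under the assumption of hypothetical hardware bits (the HW_HASH_* values below are placeholders, not fm10k's real MRQC encoding):

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Placeholder device hash bits, for illustration only. */
enum { HW_HASH_IPV4 = 0x1, HW_HASH_TCP_IPV4 = 0x2,
       HW_HASH_UDP_IPV4 = 0x4, HW_HASH_IPV6 = 0x8 };

static const struct { uint64_t rss; uint32_t hw; } rss_map[] = {
    { RTE_ETH_RSS_IPV4,             HW_HASH_IPV4 },
    { RTE_ETH_RSS_NONFRAG_IPV4_TCP, HW_HASH_TCP_IPV4 },
    { RTE_ETH_RSS_NONFRAG_IPV4_UDP, HW_HASH_UDP_IPV4 },
    { RTE_ETH_RSS_IPV6,             HW_HASH_IPV6 },
};

/* rss_hf -> device bits; walking the table the other way gives the
 * reverse mapping used by the hash_conf_get path.
 */
static uint32_t rss_hf_to_hw(uint64_t rss_hf)
{
    uint32_t hw = 0;
    unsigned int i;

    for (i = 0; i < RTE_DIM(rss_map); i++)
        if (rss_hf & rss_map[i].rss)
            hw |= rss_map[i].hw;
    return hw;
}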
@@ -2607,7 +2607,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
/* first clear the internal SW recording structure */
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
false);
@@ -2623,7 +2623,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
MAIN_VSI_POOL_NUMBER);
if (!(dev->data->dev_conf.rxmode.mq_mode &
- ETH_MQ_RX_VMDQ_FLAG))
+ RTE_ETH_MQ_RX_VMDQ_FLAG))
fm10k_vlan_filter_set(dev, hw->mac.default_vid,
true);
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
#ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
/* without rx ol_flags, no VP flag report */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
#endif
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
static int hinic_link_event_process(struct hinic_hwdev *hwdev,
struct rte_eth_dev *eth_dev, u8 status)
{
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
struct nic_port_info port_info;
struct rte_eth_link link;
int rc = HINIC_OK;
if (!status) {
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
memset(&port_info, 0, sizeof(port_info));
rc = hinic_get_port_info(hwdev, &port_info);
if (rc) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
} else {
link.link_speed = port_speed[port_info.speed %
LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a7240154668..17f32692fb2d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
/* init vlan offload */
err = hinic_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
} else {
*speed_capa = 0;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
- *speed_capa |= ETH_LINK_SPEED_1G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
- *speed_capa |= ETH_LINK_SPEED_10G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
- *speed_capa |= ETH_LINK_SPEED_25G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
- *speed_capa |= ETH_LINK_SPEED_40G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
- *speed_capa |= ETH_LINK_SPEED_100G;
+ *speed_capa |= RTE_ETH_LINK_SPEED_100G;
}
}
@@ -732,25 +732,25 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
hinic_get_speed_capa(dev, &info->speed_capa);
info->rx_queue_offload_capa = 0;
- info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_RSS_HASH;
+ info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
info->tx_queue_offload_capa = 0;
- info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
info->hash_key_size = HINIC_RSS_KEY_SIZE;
info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -847,20 +847,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
u8 port_link_status = 0;
struct nic_port_info port_link_info;
struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
- uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
- ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
- ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
- ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
rc = hinic_get_link_status(nic_hwdev, &port_link_status);
if (rc)
return rc;
if (!port_link_status) {
- link->link_status = ETH_LINK_DOWN;
+ link->link_status = RTE_ETH_LINK_DOWN;
link->link_speed = 0;
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
- link->link_autoneg = ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
return HINIC_OK;
}
@@ -902,8 +902,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* Get link status information from hardware */
rc = hinic_priv_get_dev_link_status(nic_dev, &link);
if (rc != HINIC_OK) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Get link status failed");
goto out;
}
@@ -1552,10 +1552,10 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
frame_size = HINIC_MTU_TO_PKTLEN(mtu);
if (frame_size > HINIC_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
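From the application side, this jumbo handling is driven by a single MTU call; the PMD flips RTE_ETH_RX_OFFLOAD_JUMBO_FRAME internally as shown above. A sketch (9000 is an illustrative jumbo MTU):

#include <rte_ethdev.h>

static int set_jumbo_mtu(uint16_t port_id)
{
    /* Returns 0 on success, negative errno otherwise. */
    return rte_eth_dev_set_mtu(port_id, 9000);
}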
@@ -1664,8 +1664,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
int err;
/* Enable or disable VLAN filter */
- if (mask & ETH_VLAN_FILTER_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
TRUE : FALSE;
err = hinic_config_vlan_filter(nic_dev->hwdev, on);
if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1686,8 +1686,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Enable or disable VLAN stripping */
- if (mask & ETH_VLAN_STRIP_MASK) {
- on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
TRUE : FALSE;
err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
if (err) {
@@ -1873,13 +1873,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = nic_pause.auto_neg;
if (nic_pause.tx_pause && nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (nic_pause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else if (nic_pause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1893,14 +1893,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_pause.auto_neg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
nic_pause.tx_pause = true;
else
nic_pause.tx_pause = false;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
nic_pause.rx_pause = true;
else
nic_pause.rx_pause = false;
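The same decode, factored the other way, is a small helper; a sketch mirroring hinic_flow_ctrl_get() above:

#include <stdbool.h>
#include <rte_ethdev.h>

/* Map independent rx/tx pause state to the renamed RTE_ETH_FC_* modes. */
static enum rte_eth_fc_mode fc_mode_from_pause(bool rx_pause, bool tx_pause)
{
    if (rx_pause && tx_pause)
        return RTE_ETH_FC_FULL;
    if (tx_pause)
        return RTE_ETH_FC_TX_PAUSE;
    if (rx_pause)
        return RTE_ETH_FC_RX_PAUSE;
    return RTE_ETH_FC_NONE;
}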
@@ -1944,7 +1944,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err = 0;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_OK;
}
@@ -1965,14 +1965,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
}
}
- rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
if (err) {
@@ -2008,7 +2008,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
struct nic_rss_type rss_type = {0};
int err;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
PMD_DRV_LOG(WARNING, "RSS is not enabled");
return HINIC_ERROR;
}
@@ -2029,15 +2029,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
rss_conf->rss_hf |= rss_type.ipv4 ?
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
rss_conf->rss_hf |= rss_type.ipv6 ?
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
- rss_conf->rss_hf |= rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
- rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
- rss_conf->rss_hf |= rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
return HINIC_OK;
}
@@ -2067,7 +2067,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
u16 i = 0;
u16 idx, shift;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
return HINIC_OK;
if (reta_size != NIC_RSS_INDIR_SIZE) {
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
{
u64 rss_hf = rss_conf->rss_hf;
- rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
- rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
- rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
- rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
- rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
- rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
}
static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
{
int err, i;
- if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
nic_dev->num_rss = 0;
if (nic_dev->num_rq > 1) {
/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
PMD_DRV_LOG(WARNING, "Alloc rss template failed");
return err;
}
- nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
for (i = 0; i < nic_dev->num_rq; i++)
hinic_add_rq_to_rx_queue_list(nic_dev, i);
}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
{
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (hinic_rss_template_free(nic_dev->hwdev,
nic_dev->rss_tmpl_idx))
PMD_DRV_LOG(WARNING, "Free rss template failed");
- nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+ nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
}
}
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
int ret = 0;
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
ret = hinic_config_mq_rx_rss(nic_dev, on);
break;
default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
int lro_wqe_num;
int buf_size;
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
if (rss_conf.rss_hf == 0) {
rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
}
/* Enable both L3/L4 rx checksum offload */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
goto rx_csum_ofl_err;
/* config lro */
- lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+ lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
true : false;
max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+ if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
hinic_rss_deinit(nic_dev);
hinic_destroy_num_qps(nic_dev);
}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
#define HINIC_DEFAULT_RX_FREE_THRESH 32
#define HINIC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
enum rq_completion_fmt {
RQ_COMPLETE_SGE = 1
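A typical use of an OFFLOAD_ALL macro like the one defined above is clamping the application's request to what both the PMD and the device accept; a brief sketch reusing HINIC_RSS_OFFLOAD_ALL from this header:

#include <stdint.h>

/* Drop requested hash types outside the PMD set and the device
 * capability mask (dev_capa comes from dev_info.flow_type_rss_offloads).
 */
static uint64_t clamp_rss_hf(uint64_t requested, uint64_t dev_capa)
{
    return requested & HINIC_RSS_OFFLOAD_ALL & dev_capa;
}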
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index b71e2e9ea451..953c146d0200 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
return ret;
}
- if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
if (dcb_rx_conf->nb_tcs == 0)
hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
uint16_t nb_tx_q = hw->data->nb_tx_queues;
int ret;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
return 0;
ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
{
switch (mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->requested_fc_mode = HNS3_FC_NONE;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->requested_fc_mode = HNS3_FC_FULL;
break;
default:
hw->requested_fc_mode = HNS3_FC_NONE;
hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
- "configured to RTE_FC_NONE", mode);
+ "configured to RTE_ETH_FC_NONE", mode);
break;
}
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..64d1da09a707 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
};
static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
- { ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
- { ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
- { ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+ { RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
};
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
struct hns3_cmd_desc desc;
int ret;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
return -EINVAL;
}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
- } else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+ } else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
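For completeness, the ethdev entry point that reaches this TPID programming; a sketch using the QinQ outer TPID 0x88a8 for illustration:

#include <rte_ethdev.h>

static int set_outer_tpid(uint16_t port_id)
{
    return rte_eth_dev_set_vlan_ether_type(port_id,
            RTE_ETH_VLAN_TYPE_OUTER, 0x88a8);
}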
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rte_spinlock_lock(&hw->lock);
rxmode = &dev->data->dev_conf.rxmode;
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
/* ignore vlan filter configuration during promiscuous mode */
if (!dev->data->promiscuous) {
/* Enable or disable VLAN filter */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
true : false;
ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+ enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
true : false;
ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
return ret;
}
- ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+ ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
if (ret) {
hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
if (!hw->data->promiscuous) {
/* restore vlan filter states */
offloads = hw->data->dev_conf.rxmode.offloads;
- enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+ enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
ret = hns3_enable_vlan_filter(hns, enable);
if (ret) {
hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
txmode->hw_vlan_reject_untagged);
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
ret = hns3_vlan_offload_set(dev, mask);
if (ret) {
hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2218,9 +2218,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
int max_tc = 0;
int i;
- if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
- (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
- tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+ if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+ (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+ tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
rx_mq_mode, tx_mq_mode);
return -EOPNOTSUPP;
@@ -2228,7 +2228,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
- if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
if (dcb_rx_conf->nb_tcs > pf->tc_max) {
hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2237,7 +2237,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
- hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+ hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
"nb_tcs(%d) != %d or %d in rx direction.",
dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
return -EINVAL;
@@ -2380,7 +2380,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
uint16_t mtu;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME))
return 0;
/*
@@ -2440,11 +2440,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
* configure link_speeds (default 0), which means auto-negotiation.
* In this case, it should return success.
*/
- if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+ if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
hw->mac.support_autoneg == 0)
return 0;
- if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+ if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
ret = hns3_check_port_speed(hw, link_speeds);
if (ret)
return ret;
@@ -2504,15 +2504,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
if (ret)
goto cfg_err;
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = hns3_setup_dcb(dev);
if (ret)
goto cfg_err;
}
/* When RSS is not configured, redirect the packet queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rss_conf = conf->rx_adv_conf.rss_conf;
hw->rss_dis_flag = false;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2533,7 +2533,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -2633,10 +2633,10 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (is_jumbo_frame)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
@@ -2649,15 +2649,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_10M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
- speed_capa |= ETH_LINK_SPEED_10M;
+ speed_capa |= RTE_ETH_LINK_SPEED_10M;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
- speed_capa |= ETH_LINK_SPEED_100M_HD;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
- speed_capa |= ETH_LINK_SPEED_100M;
+ speed_capa |= RTE_ETH_LINK_SPEED_100M;
if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
return speed_capa;
}
@@ -2668,19 +2668,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
uint32_t speed_capa = 0;
if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
- speed_capa |= ETH_LINK_SPEED_1G;
+ speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
- speed_capa |= ETH_LINK_SPEED_10G;
+ speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
- speed_capa |= ETH_LINK_SPEED_25G;
+ speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
- speed_capa |= ETH_LINK_SPEED_40G;
+ speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
- speed_capa |= ETH_LINK_SPEED_50G;
+ speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
- speed_capa |= ETH_LINK_SPEED_100G;
+ speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
- speed_capa |= ETH_LINK_SPEED_200G;
+ speed_capa |= RTE_ETH_LINK_SPEED_200G;
return speed_capa;
}
@@ -2699,7 +2699,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
hns3_get_firber_port_speed_capa(mac->supported_speed);
if (mac->support_autoneg == 0)
- speed_capa |= ETH_LINK_SPEED_FIXED;
+ speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
return speed_capa;
}
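When the numeric speed is already known, ethdev provides a helper that yields the matching capability bit, which can avoid a hand-written ladder where it applies; a brief sketch:

#include <rte_ethdev.h>

/* rte_eth_speed_bitflag() maps an RTE_ETH_SPEED_NUM_* value plus
 * duplex to the corresponding RTE_ETH_LINK_SPEED_* bit.
 */
static uint32_t speed_to_capa(uint32_t speed_num)
{
    return rte_eth_speed_bitflag(speed_num, RTE_ETH_LINK_FULL_DUPLEX);
}
/* speed_to_capa(RTE_ETH_SPEED_NUM_25G) == RTE_ETH_LINK_SPEED_25G */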
@@ -2725,41 +2725,41 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_outer_udp_cksum_supported(hw))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_indep_txrx_supported(hw))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
if (hns3_dev_ptp_supported(hw))
- info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+ info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
@@ -2843,7 +2843,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
ret = hns3_update_link_info(eth_dev);
if (ret)
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
return ret;
}
@@ -2856,29 +2856,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link->link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link->link_speed = ETH_SPEED_NUM_NONE;
+ new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link->link_duplex = mac->link_duplex;
- new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link->link_autoneg = mac->link_autoneg;
}
@@ -2898,8 +2898,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
if (eth_dev->data->dev_started == 0) {
new_link.link_autoneg = mac->link_autoneg;
new_link.link_duplex = mac->link_duplex;
- new_link.link_speed = ETH_SPEED_NUM_NONE;
- new_link.link_status = ETH_LINK_DOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ new_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
@@ -2911,7 +2911,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
break;
}
- if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+ if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3257,31 +3257,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
{
switch (speed_cmd) {
case HNS3_CFG_SPEED_10M:
- *speed = ETH_SPEED_NUM_10M;
+ *speed = RTE_ETH_SPEED_NUM_10M;
break;
case HNS3_CFG_SPEED_100M:
- *speed = ETH_SPEED_NUM_100M;
+ *speed = RTE_ETH_SPEED_NUM_100M;
break;
case HNS3_CFG_SPEED_1G:
- *speed = ETH_SPEED_NUM_1G;
+ *speed = RTE_ETH_SPEED_NUM_1G;
break;
case HNS3_CFG_SPEED_10G:
- *speed = ETH_SPEED_NUM_10G;
+ *speed = RTE_ETH_SPEED_NUM_10G;
break;
case HNS3_CFG_SPEED_25G:
- *speed = ETH_SPEED_NUM_25G;
+ *speed = RTE_ETH_SPEED_NUM_25G;
break;
case HNS3_CFG_SPEED_40G:
- *speed = ETH_SPEED_NUM_40G;
+ *speed = RTE_ETH_SPEED_NUM_40G;
break;
case HNS3_CFG_SPEED_50G:
- *speed = ETH_SPEED_NUM_50G;
+ *speed = RTE_ETH_SPEED_NUM_50G;
break;
case HNS3_CFG_SPEED_100G:
- *speed = ETH_SPEED_NUM_100G;
+ *speed = RTE_ETH_SPEED_NUM_100G;
break;
case HNS3_CFG_SPEED_200G:
- *speed = ETH_SPEED_NUM_200G;
+ *speed = RTE_ETH_SPEED_NUM_200G;
break;
default:
return -EINVAL;
@@ -3610,39 +3610,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
switch (speed) {
- case ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_10M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
break;
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
break;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
break;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
break;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
break;
@@ -4305,14 +4305,14 @@ hns3_mac_init(struct hns3_hw *hw)
int ret;
pf->support_sfp_query = true;
- mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+ mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
if (ret) {
PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
return ret;
}
- mac->link_status = ETH_LINK_DOWN;
+ mac->link_status = RTE_ETH_LINK_DOWN;
return hns3_config_mtu(hw, pf->mps);
}
@@ -4562,7 +4562,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
* all packets coming in the receiving direction.
*/
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, false);
if (ret) {
hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4603,7 +4603,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
}
/* when promiscuous mode was disabled, restore the vlan filter status */
offloads = dev->data->dev_conf.rxmode.offloads;
- if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = hns3_enable_vlan_filter(hns, true);
if (ret) {
hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4723,8 +4723,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
mac_info->supported_speed =
rte_le_to_cpu_32(resp->supported_speed);
mac_info->support_autoneg = resp->autoneg_ability;
- mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
- : ETH_LINK_AUTONEG;
+ mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+ : RTE_ETH_LINK_AUTONEG;
} else {
mac_info->query_type = HNS3_DEFAULT_QUERY;
}
@@ -4735,8 +4735,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
static uint8_t
hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
{
- if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
- duplex = ETH_LINK_FULL_DUPLEX;
+ if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+ duplex = RTE_ETH_LINK_FULL_DUPLEX;
return duplex;
}
@@ -4786,7 +4786,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
return ret;
/* Do nothing if no SFP */
- if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+ if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
return 0;
/*
@@ -4813,7 +4813,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
/* Config full duplex for SFP */
return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
- ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_FULL_DUPLEX);
}
static void
@@ -4932,10 +4932,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
/*
- * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+ * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
* when receiving frames. Otherwise, CRC will be stripped.
*/
- if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
else
hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4963,7 +4963,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
hns3_err(hw, "get link status cmd failed %d", ret);
- return ETH_LINK_DOWN;
+ return RTE_ETH_LINK_DOWN;
}
req = (struct hns3_link_status_cmd *)desc.data;
@@ -5145,19 +5145,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
struct hns3_mac *mac = &hw->mac;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
return HNS3_FIBER_LINK_SPEED_1G_BIT;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
return HNS3_FIBER_LINK_SPEED_10G_BIT;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
return HNS3_FIBER_LINK_SPEED_25G_BIT;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
return HNS3_FIBER_LINK_SPEED_40G_BIT;
- case ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_50G:
return HNS3_FIBER_LINK_SPEED_50G_BIT;
- case ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_100G:
return HNS3_FIBER_LINK_SPEED_100G_BIT;
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_200G:
return HNS3_FIBER_LINK_SPEED_200G_BIT;
default:
hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5395,20 +5395,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_10M:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_10M:
speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
break;
- case ETH_LINK_SPEED_10M_HD:
+ case RTE_ETH_LINK_SPEED_10M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
break;
- case ETH_LINK_SPEED_100M:
+ case RTE_ETH_LINK_SPEED_100M:
speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
break;
- case ETH_LINK_SPEED_100M_HD:
+ case RTE_ETH_LINK_SPEED_100M_HD:
speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
break;
- case ETH_LINK_SPEED_1G:
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
break;
default:
@@ -5424,26 +5424,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
{
uint32_t speed_bit;
- switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
- case ETH_LINK_SPEED_1G:
+ switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+ case RTE_ETH_LINK_SPEED_1G:
speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
break;
- case ETH_LINK_SPEED_10G:
+ case RTE_ETH_LINK_SPEED_10G:
speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
break;
- case ETH_LINK_SPEED_25G:
+ case RTE_ETH_LINK_SPEED_25G:
speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
break;
- case ETH_LINK_SPEED_40G:
+ case RTE_ETH_LINK_SPEED_40G:
speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
break;
- case ETH_LINK_SPEED_50G:
+ case RTE_ETH_LINK_SPEED_50G:
speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
break;
- case ETH_LINK_SPEED_100G:
+ case RTE_ETH_LINK_SPEED_100G:
speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
break;
- case ETH_LINK_SPEED_200G:
+ case RTE_ETH_LINK_SPEED_200G:
speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
break;
default:
@@ -5478,28 +5478,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
static inline uint32_t
hns3_get_link_speed(uint32_t link_speeds)
{
- uint32_t speed = ETH_SPEED_NUM_NONE;
-
- if (link_speeds & ETH_LINK_SPEED_10M ||
- link_speeds & ETH_LINK_SPEED_10M_HD)
- speed = ETH_SPEED_NUM_10M;
- if (link_speeds & ETH_LINK_SPEED_100M ||
- link_speeds & ETH_LINK_SPEED_100M_HD)
- speed = ETH_SPEED_NUM_100M;
- if (link_speeds & ETH_LINK_SPEED_1G)
- speed = ETH_SPEED_NUM_1G;
- if (link_speeds & ETH_LINK_SPEED_10G)
- speed = ETH_SPEED_NUM_10G;
- if (link_speeds & ETH_LINK_SPEED_25G)
- speed = ETH_SPEED_NUM_25G;
- if (link_speeds & ETH_LINK_SPEED_40G)
- speed = ETH_SPEED_NUM_40G;
- if (link_speeds & ETH_LINK_SPEED_50G)
- speed = ETH_SPEED_NUM_50G;
- if (link_speeds & ETH_LINK_SPEED_100G)
- speed = ETH_SPEED_NUM_100G;
- if (link_speeds & ETH_LINK_SPEED_200G)
- speed = ETH_SPEED_NUM_200G;
+ uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+ if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+ link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+ speed = RTE_ETH_SPEED_NUM_10M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+ link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+ speed = RTE_ETH_SPEED_NUM_100M;
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+ speed = RTE_ETH_SPEED_NUM_1G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+ speed = RTE_ETH_SPEED_NUM_10G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+ speed = RTE_ETH_SPEED_NUM_25G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+ speed = RTE_ETH_SPEED_NUM_40G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+ speed = RTE_ETH_SPEED_NUM_50G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+ speed = RTE_ETH_SPEED_NUM_100G;
+ if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+ speed = RTE_ETH_SPEED_NUM_200G;
return speed;
}
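
The link_speeds bitmap consumed above comes from the application's rte_eth_conf. A minimal sketch, with port_id and the single-queue counts as placeholders, requesting a fixed 10G link under the new names:

#include <rte_ethdev.h>

/* Sketch: request a fixed 10 Gbps link at configure time. */
static int
configure_fixed_10g(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};

	conf.link_speeds = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_FIXED;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}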
@@ -5507,11 +5507,11 @@ hns3_get_link_speed(uint32_t link_speeds)
static uint8_t
hns3_get_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
static int
@@ -5645,9 +5645,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
struct hns3_set_link_speed_cfg cfg;
memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
- cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
- if (cfg.autoneg != ETH_LINK_AUTONEG) {
+ cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+ if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
cfg.speed = hns3_get_link_speed(conf->link_speeds);
cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
}
@@ -5920,7 +5920,7 @@ hns3_do_stop(struct hns3_adapter *hns)
ret = hns3_cfg_mac_mode(hw, false);
if (ret)
return ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
hns3_configure_all_mac_addr(hns, true);
@@ -6131,17 +6131,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
current_mode = hns3_get_current_fc_mode(dev);
switch (current_mode) {
case HNS3_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case HNS3_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case HNS3_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case HNS3_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
}
@@ -6287,7 +6287,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
int i;
rte_spinlock_lock(&hw->lock);
- if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = pf->local_max_tc;
else
dcb_info->nb_tcs = 1;
@@ -6587,7 +6587,7 @@ hns3_stop_service(struct hns3_adapter *hns)
struct rte_eth_dev *eth_dev;
eth_dev = &rte_eth_devices[hw->data->port_id];
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
if (hw->adapter_state == HNS3_NIC_STARTED) {
rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
hns3_update_linkstatus_and_event(hw, false);
@@ -6877,7 +6877,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
* in devices with a link speed
* below 10 Gbps.
*/
- if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+ if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
*state = 0;
return 0;
}
@@ -6909,7 +6909,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
* configured FEC mode is returned.
* If link is up, current FEC mode is returned.
*/
- if (hw->mac.link_status == ETH_LINK_DOWN) {
+ if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
ret = get_current_fec_auto_state(hw, &auto_state);
if (ret)
return ret;
@@ -7008,12 +7008,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
uint32_t cur_capa;
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
cur_capa = fec_capa[1].capa;
break;
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
cur_capa = fec_capa[0].capa;
break;
default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e4e4269a12f..c40d28af1d46 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -191,10 +191,10 @@ struct hns3_mac {
bool default_addr_setted; /* whether default addr(mac_addr) is set */
uint8_t media_type;
uint8_t phy_addr;
- uint8_t link_duplex : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
- uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
- uint8_t link_status : 1; /* ETH_LINK_[DOWN/UP] */
- uint32_t link_speed; /* ETH_SPEED_NUM_ */
+ uint8_t link_duplex : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+ uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+ uint8_t link_status : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+ uint32_t link_speed; /* RTE_ETH_SPEED_NUM_ */
/*
* Some firmware versions support only the SFP speed query. In addition
* to the SFP speed query, some firmware supports the query of the speed
@@ -1114,9 +1114,9 @@ static inline uint64_t
hns3_txvlan_cap_get(struct hns3_hw *hw)
{
if (hw->port_base_vlan_cfg.state)
- return DEV_TX_OFFLOAD_VLAN_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
else
- return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+ return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
}
#endif /* _HNS3_ETHDEV_H_ */
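
Applications probe capability bits such as the ones returned by hns3_txvlan_cap_get() before enabling an offload. A minimal sketch of that check under the new names (the helper name is illustrative):

#include <rte_ethdev.h>

/* Sketch: enable TX VLAN insertion only if the port reports it. */
static int
enable_tx_vlan_insert(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
	return 0;
}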
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..53d79bb2106c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -809,15 +809,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
}
hw->adapter_state = HNS3_NIC_CONFIGURING;
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
hns3_err(hw, "setting link speed/duplex not supported");
ret = -EINVAL;
goto cfg_err;
}
/* When RSS is not configured, redirect packets to queue 0 */
- if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
hw->rss_dis_flag = false;
rss_conf = conf->rx_adv_conf.rss_conf;
ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -829,7 +829,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
* If jumbo frames are enabled, MTU needs to be refreshed
* according to the maximum RX packet length.
*/
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
@@ -853,7 +853,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
/* config hardware GRO */
- gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
goto cfg_err;
@@ -931,10 +931,10 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
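
The hunk above shows the PMD flipping RTE_ETH_RX_OFFLOAD_JUMBO_FRAME as a side effect of an MTU change. The caller side is just the public MTU API; a sketch with 9000 as an example value:

#include <rte_ethdev.h>

/* Sketch: raise the MTU; the PMD updates the jumbo offload bit. */
static int
set_jumbo_mtu(uint16_t port_id)
{
	return rte_eth_dev_set_mtu(port_id, 9000);
}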
@@ -963,33 +963,33 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
- info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_SCTP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_TCP_LRO);
- info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+ info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
hns3_txvlan_cap_get(hw));
if (hns3_dev_outer_udp_cksum_supported(hw))
- info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
if (hns3_dev_indep_txrx_supported(hw))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1669,10 +1669,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
tmp_mask = (unsigned int)mask;
- if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN filter */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = hns3vf_en_vlan_filter(hw, true);
else
ret = hns3vf_en_vlan_filter(hw, false);
@@ -1682,10 +1682,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
/* Vlan stripping setting */
- if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+ if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
rte_spinlock_lock(&hw->lock);
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ret = hns3vf_en_hw_strip_rxvtag(hw, true);
else
ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1753,7 +1753,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
int ret;
dev_conf = &hw->data->dev_conf;
- en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+ en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
: false;
ret = hns3vf_en_hw_strip_rxvtag(hw, en);
if (ret)
@@ -1778,8 +1778,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
}
/* Apply vlan offload setting */
- ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
@@ -2088,7 +2088,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
struct hns3_hw *hw = &hns->hw;
int ret;
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
/*
* The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2247,31 +2247,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&new_link, 0, sizeof(new_link));
switch (mac->link_speed) {
- case ETH_SPEED_NUM_10M:
- case ETH_SPEED_NUM_100M:
- case ETH_SPEED_NUM_1G:
- case ETH_SPEED_NUM_10G:
- case ETH_SPEED_NUM_25G:
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_50G:
- case ETH_SPEED_NUM_100G:
- case ETH_SPEED_NUM_200G:
+ case RTE_ETH_SPEED_NUM_10M:
+ case RTE_ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_50G:
+ case RTE_ETH_SPEED_NUM_100G:
+ case RTE_ETH_SPEED_NUM_200G:
if (mac->link_status)
new_link.link_speed = mac->link_speed;
break;
default:
if (mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
}
if (!mac->link_status)
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
new_link.link_duplex = mac->link_duplex;
- new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(eth_dev, &new_link);
}
@@ -2599,11 +2599,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
* Make sure to update the link status before hns3vf_stop_poll_job,
* because updating the link status depends on the polling job running.
*/
- hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+ hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
hw->mac.link_duplex);
hns3vf_stop_poll_job(eth_dev);
}
- hw->mac.link_status = ETH_LINK_DOWN;
+ hw->mac.link_status = RTE_ETH_LINK_DOWN;
hns3_set_rxtx_function(eth_dev);
rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index fc77979c5f14..0ac8705b590b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
* Kunpeng930 and later Kunpeng series support using src/dst port
* fields in the RSS hash for the IPv6 SCTP packet type.
*/
- if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
- (rss->types & ETH_RSS_IP ||
+ if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+ (rss->types & RTE_ETH_RSS_IP ||
(!hw->rss_info.ipv6_sctp_offload_supported &&
- rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+ rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return false;
return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index df8485904688..395590c86c03 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
struct hns3_hw *hw = &hns->hw;
int ret;
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
return 0;
ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..2c5661567945 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_tuple_table[] = {
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
- { ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
- { ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
};
@@ -146,44 +146,44 @@ static const struct {
uint64_t rss_types;
uint64_t rss_field;
} hns3_set_rss_types[] = {
- { ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+ { RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
- { ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+ { RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
- { ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
- { ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
- { ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
};
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
* When the user does not specify any of the following types, or a
* combination of them, all fields are enabled for the supported RSS
* types. The types are:
- * - ETH_RSS_L3_SRC_ONLY
- * - ETH_RSS_L3_DST_ONLY
- * - ETH_RSS_L4_SRC_ONLY
- * - ETH_RSS_L4_DST_ONLY
+ * - RTE_ETH_RSS_L3_SRC_ONLY
+ * - RTE_ETH_RSS_L3_DST_ONLY
+ * - RTE_ETH_RSS_L4_SRC_ONLY
+ * - RTE_ETH_RSS_L4_DST_ONLY
*/
if (fields_count == 0) {
for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
}
/* When RSS is off, redirect packets to queue 0 */
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
hns3_rss_uninit(hns);
/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
* When RSS is off, there is no need to configure the RSS redirection
* table in hardware.
*/
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
hw->rss_ind_tbl_size);
if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
return ret;
rss_indir_table_uninit:
- if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+ if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
ret1 = hns3_rss_reset_indir_table(hw);
if (ret1 != 0)
return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
#include <rte_flow.h>
#define HNS3_ETH_RSS_SUPPORT ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
#define HNS3_RSS_IND_TBL_SIZE 512 /* The size of hash lookup table */
#define HNS3_RSS_IND_TBL_SIZE_MAX 2048
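
An application selects a subset of HNS3_ETH_RSS_SUPPORT through rss_conf at configure time. A minimal sketch with illustrative hash types, keeping the PMD's default key:

#include <rte_ethdev.h>

/* Sketch: hash on non-fragmented IPv4 TCP/UDP with the renamed
 * RTE_ETH_RSS_* flags; a NULL key keeps the PMD default. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_conf conf = {0};

	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_key = NULL;
	conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
					   RTE_ETH_RSS_NONFRAG_IPV4_UDP;
	return rte_eth_dev_configure(port_id, nb_rxq, 1, &conf);
}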
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0f222b37f9d1..01e43791572b 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1912,7 +1912,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
/* CRC len set here is used for amending packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
rxq->rx_buf_len);
}
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
@@ -2833,7 +2833,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
vec_allowed = vec_support && hns3_get_default_vec_support();
sve_allowed = vec_support && hns3_get_sve_support();
simple_allowed = !dev->data->scattered_rx &&
- (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+ (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
return hns3_recv_pkts_vec;
@@ -3127,7 +3127,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
int ret;
offloads = hw->data->dev_conf.rxmode.offloads;
- gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+ gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
ret = hns3_config_gro(hw, gro_en);
if (ret)
hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4279,7 +4279,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
if (hns3_dev_ptp_supported(hw))
return false;
- return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+ return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
}
static bool
@@ -4291,16 +4291,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
return true;
#else
#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
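
This mask decides whether the PMD advertises a Tx prepare step; the matching application pattern pairs rte_eth_tx_prepare() with rte_eth_tx_burst() whenever checksum/TSO offloads are enabled. A minimal sketch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: run tx_prepare before tx_burst so the PMD can fix up
 * headers/pseudo-checksums for offloaded packets. */
static uint16_t
send_burst(uint16_t port_id, uint16_t queue_id,
	   struct rte_mbuf **pkts, uint16_t nb)
{
	uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb);

	return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
}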
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0c8..2fa3a01dd3bf 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
uint16_t rx_rearm_start; /* index of the BD the driver re-arms from */
uint16_t rx_rearm_nb; /* number of remaining BDs to be re-armed */
- /* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+ /* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
uint8_t crc_len;
/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index 844512f6ceec..d01a8d62bfb1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
if (hns3_dev_ptp_supported(hw))
return -ENOTSUP;
- /* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
- if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ /* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+ if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
return -ENOTSUP;
return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
int
hns3_rx_check_vec_support(struct rte_eth_dev *dev)
{
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN;
+ uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hns3_dev_ptp_supported(hw))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed17a..0d9ebf208614 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1641,7 +1641,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* Set the global registers with default ether type value */
if (!pf->support_multi_driver) {
- ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
if (ret != I40E_SUCCESS) {
PMD_INIT_LOG(ERR,
@@ -1909,8 +1909,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
ad->tx_simple_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Only legacy filter API needs the following fdir config. So when the
* legacy filter API is deprecated, the following codes should also be
@@ -1944,13 +1944,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
* number, which will be available after rx_queue_setup(). dev_start()
* function is good to place RSS setup.
*/
- if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
ret = i40e_vmdq_setup(dev);
if (ret)
goto err;
}
- if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
ret = i40e_dcb_setup(dev);
if (ret) {
PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2227,17 +2227,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
{
uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed |= I40E_LINK_SPEED_40GB;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed |= I40E_LINK_SPEED_25GB;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed |= I40E_LINK_SPEED_20GB;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed |= I40E_LINK_SPEED_10GB;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed |= I40E_LINK_SPEED_1GB;
- if (link_speeds & ETH_LINK_SPEED_100M)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100M)
link_speed |= I40E_LINK_SPEED_100MB;
return link_speed;
@@ -2345,13 +2345,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
I40E_AQ_PHY_LINK_ENABLED;
- if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
- conf->link_speeds = ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_100M;
+ if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+ conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_100M;
abilities |= I40E_AQ_PHY_AN_ENABLED;
} else {
@@ -2910,34 +2910,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
/* Parse the link status */
switch (link_speed) {
case I40E_REG_SPEED_0:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_REG_SPEED_1:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_REG_SPEED_2:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_2_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
else
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_REG_SPEED_3:
if (hw->mac.type == I40E_MAC_X722) {
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
} else {
reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
if (reg_val & I40E_REG_MACC_25GB)
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
else
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
}
break;
case I40E_REG_SPEED_4:
if (hw->mac.type == I40E_MAC_X722)
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
else
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
default:
PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2964,8 +2964,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
status = i40e_aq_get_link_info(hw, enable_lse,
&link_status, NULL);
if (unlikely(status != I40E_SUCCESS)) {
- link->link_speed = ETH_SPEED_NUM_NONE;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
return;
}
@@ -2980,28 +2980,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
/* Parse the link status */
switch (link_status.link_speed) {
case I40E_LINK_SPEED_100MB:
- link->link_speed = ETH_SPEED_NUM_100M;
+ link->link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- link->link_speed = ETH_SPEED_NUM_20G;
+ link->link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (link->link_status)
- link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -3018,9 +3018,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
/* i40e uses full duplex only */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (!wait_to_complete && !enable_lse)
update_link_reg(hw, &link);
@@ -3748,34 +3748,34 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3834,7 +3834,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
/* For XL710 */
- dev_info->speed_capa = ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
dev_info->default_rxportconf.nb_queues = 2;
dev_info->default_txportconf.nb_queues = 2;
if (dev->data->nb_rx_queues == 1)
@@ -3848,17 +3848,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
/* For XXV710 */
- dev_info->speed_capa = ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
dev_info->default_txportconf.ring_size = 256;
} else {
/* For X710 */
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
dev_info->default_rxportconf.ring_size = 512;
dev_info->default_txportconf.ring_size = 256;
} else {
@@ -3897,7 +3897,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
int ret;
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
reg_id = 2;
}
@@ -3944,12 +3944,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
int ret = 0;
- if ((vlan_type != ETH_VLAN_TYPE_INNER &&
- vlan_type != ETH_VLAN_TYPE_OUTER) ||
- (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+ if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+ vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+ (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
PMD_DRV_LOG(ERR,
"Unsupported vlan type.");
return -EINVAL;
@@ -3963,12 +3963,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
/* 802.1ad frames ability is added in NVM API 1.7*/
if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
if (qinq) {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->first_tag = rte_cpu_to_le_16(tpid);
- else if (vlan_type == ETH_VLAN_TYPE_INNER)
+ else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
hw->second_tag = rte_cpu_to_le_16(tpid);
} else {
- if (vlan_type == ETH_VLAN_TYPE_OUTER)
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
hw->second_tag = rte_cpu_to_le_16(tpid);
}
ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -4027,37 +4027,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
i40e_vsi_config_vlan_filter(vsi, TRUE);
else
i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_vlan_stripping(vsi, FALSE);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
i40e_vsi_config_double_vlan(vsi, TRUE);
/* Set global registers with default ethertype. */
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
RTE_ETHER_TYPE_VLAN);
- i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+ i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
RTE_ETHER_TYPE_VLAN);
}
else
i40e_vsi_config_double_vlan(vsi, FALSE);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
/* Enable or disable outer VLAN stripping */
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
else
i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
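
The mask bits handled above arrive via rte_eth_dev_set_vlan_offload(). A minimal caller sketch, assuming the corresponding RTE_ETH_RX_OFFLOAD_* bits were already set in dev_conf at configure time:

#include <rte_ethdev.h>

/* Sketch: (re)apply VLAN stripping and filtering at runtime. */
static int
apply_vlan_offloads(uint16_t port_id)
{
	return rte_eth_dev_set_vlan_offload(port_id,
					    RTE_ETH_VLAN_STRIP_MASK |
					    RTE_ETH_VLAN_FILTER_MASK);
}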
@@ -4140,17 +4140,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
/* Return current mode according to the actual setting */
switch (hw->fc.current_mode) {
case I40E_FC_FULL:
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
case I40E_FC_TX_PAUSE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case I40E_FC_RX_PAUSE:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case I40E_FC_NONE:
default:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
};
return 0;
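
The translated mode surfaces through rte_eth_dev_flow_ctrl_get(); a minimal caller sketch using the renamed RTE_ETH_FC_* values:

#include <rte_ethdev.h>

/* Sketch: report whether any pause-frame direction is active. */
static int
flow_ctrl_active(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf = {0};
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	return fc_conf.mode != RTE_ETH_FC_NONE;
}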
@@ -4166,10 +4166,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
struct i40e_hw *hw;
struct i40e_pf *pf;
enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
- [RTE_FC_NONE] = I40E_FC_NONE,
- [RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
- [RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
- [RTE_FC_FULL] = I40E_FC_FULL
+ [RTE_ETH_FC_NONE] = I40E_FC_NONE,
+ [RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+ [RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+ [RTE_ETH_FC_FULL] = I40E_FC_FULL
};
/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4316,7 +4316,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
}
rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
else
mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4469,7 +4469,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4512,7 +4512,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
int ret;
if (reta_size != lut_size ||
- reta_size > ETH_RSS_RETA_SIZE_512) {
+ reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
PMD_DRV_LOG(ERR,
"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
reta_size, lut_size);
@@ -4847,7 +4847,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
hw->func_caps.num_vsis - vsi_count);
pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS);
+ RTE_ETH_64_POOLS);
if (pf->max_nb_vmdq_vsi) {
pf->flags |= I40E_FLAG_VMDQ;
pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6132,10 +6132,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
int mask = 0;
/* Apply vlan offload setting */
- mask = ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = i40e_vlan_offload_set(dev, mask);
if (ret) {
PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6262,9 +6262,9 @@ i40e_pf_setup(struct i40e_pf *pf)
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
- if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+ if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
- else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+ else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
else {
PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7117,7 +7117,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
{
uint32_t vid_idx, vid_bit;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return 0;
vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7152,7 +7152,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
int ret;
- if (vlan_id > ETH_VLAN_ID_MAX)
+ if (vlan_id > RTE_ETH_VLAN_ID_MAX)
return;
i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -8730,16 +8730,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8765,12 +8765,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -1;
break;
@@ -8862,7 +8862,7 @@ int
i40e_pf_reset_rss_reta(struct i40e_pf *pf)
{
struct i40e_hw *hw = &pf->adapter->hw;
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
int num;
@@ -8870,7 +8870,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
* configured. It's necessary to calculate the actual PF
* queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
else
num = pf->dev_data->nb_rx_queues;
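The reset path then refills the lookup table across those queues. For reference, a minimal stand-alone sketch of the round-robin spread such a RETA reset performs (an assumption about the fill logic, not the driver's exact code, which is outside this hunk):

#include <stdint.h>

/* Spread RETA entries round-robin over the active Rx queues, using the
 * queue count computed as in the hunk above. */
static void fill_reta(uint8_t *lut, uint32_t lut_size, uint32_t nb_queues)
{
	uint32_t i;

	for (i = 0; i < lut_size; i++)
		lut[i] = (uint8_t)(i % nb_queues);
}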
@@ -8949,7 +8949,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
if (!(rss_hf & pf->adapter->flow_types_mask) ||
- !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+ !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
return 0;
hw = I40E_PF_TO_HW(pf);
@@ -10412,8 +10412,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
return I40E_ERR_NO_MEMORY;
}
switch (mirror_conf->rule_type) {
- case ETH_MIRROR_VLAN:
- for (i = 0, j = 0; i < ETH_MIRROR_MAX_VLANS; i++) {
+ case RTE_ETH_MIRROR_VLAN:
+ for (i = 0, j = 0; i < RTE_ETH_MIRROR_MAX_VLANS; i++) {
if (mirror_conf->vlan.vlan_mask & (1ULL << i)) {
mirr_rule->entries[j] =
mirror_conf->vlan.vlan_id[i];
@@ -10427,8 +10427,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
}
mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_VLAN;
break;
- case ETH_MIRROR_VIRTUAL_POOL_UP:
- case ETH_MIRROR_VIRTUAL_POOL_DOWN:
+ case RTE_ETH_MIRROR_VIRTUAL_POOL_UP:
+ case RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN:
/* check if the specified pool bit is out of range */
if (mirror_conf->pool_mask > (uint64_t)(1ULL << (pf->vf_num + 1))) {
PMD_DRV_LOG(ERR, "pool mask is out of range.");
@@ -10453,15 +10453,15 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
}
/* egress and ingress in aq commands means from switch but not port */
mirr_rule->rule_type =
- (mirror_conf->rule_type == ETH_MIRROR_VIRTUAL_POOL_UP) ?
+ (mirror_conf->rule_type == RTE_ETH_MIRROR_VIRTUAL_POOL_UP) ?
I40E_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS :
I40E_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS;
break;
- case ETH_MIRROR_UPLINK_PORT:
+ case RTE_ETH_MIRROR_UPLINK_PORT:
/* egress and ingress in aq commands means from switch but not port*/
mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS;
break;
- case ETH_MIRROR_DOWNLINK_PORT:
+ case RTE_ETH_MIRROR_DOWNLINK_PORT:
mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS;
break;
default:
@@ -10603,16 +10603,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_40G:
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_25G:
tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
break;
@@ -10840,7 +10840,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
else
*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
dcb_cfg->pfc.willing = 0;
dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11348,7 +11348,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
uint16_t bsf, tc_mapping;
int i, j = 0;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
else
dcb_info->nb_tcs = 1;
@@ -11396,7 +11396,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
}
j++;
- } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+ } while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
return 0;
}
@@ -11774,10 +11774,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > I40E_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
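The MTU path above is the same toggle that recurs throughout this patch: compute the frame size, then set or clear the jumbo-frame offload bit. A self-contained sketch of the pattern (the struct and the offload bit value are illustrative stand-ins; only I40E_ETH_MAX_LEN comes from the context above):

#include <stdint.h>

#define I40E_ETH_MAX_LEN 1518              /* from the driver context */
#define RX_OFFLOAD_JUMBO_FRAME (1ULL << 3) /* illustrative bit, not the real value */

struct rxmode { uint64_t offloads; uint32_t max_rx_pkt_len; };

static void set_frame_size(struct rxmode *rxmode, uint32_t frame_size)
{
	/* frames above the standard maximum require the jumbo offload */
	if (frame_size > I40E_ETH_MAX_LEN)
		rxmode->offloads |= RX_OFFLOAD_JUMBO_FRAME;
	else
		rxmode->offloads &= ~RX_OFFLOAD_JUMBO_FRAME;
	rxmode->max_rx_pkt_len = frame_size;
}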
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60b3..f21c2de6bdb9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -139,17 +139,17 @@ enum i40e_flxpld_layer_idx {
I40E_FLAG_RSS_AQ_CAPABLE)
#define I40E_RSS_OFFLOAD_ALL ( \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/* All bits of RSS hash enable for X722*/
#define I40E_RSS_HENA_ALL_X722 ( \
@@ -1076,7 +1076,7 @@ struct i40e_rte_flow_rss_conf {
uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t)]; /**< Hash key. */
- uint16_t queue[ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
+ uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512]; /**< Queues indices to use. */
bool symmetric_enable; /**< true, if enable symmetric */
uint64_t config_pctypes; /**< All PCTYPES with the flow */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b227..192e7234909f 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1077,7 +1077,7 @@ i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid)
* VLAN_STRIP by default. So reconfigure the vlan_offload
* as it was done by the app earlier.
*/
- err = i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+ err = i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
if (err)
PMD_DRV_LOG(ERR, "fail to set vlan_strip");
@@ -1403,28 +1403,28 @@ i40evf_handle_pf_event(struct rte_eth_dev *dev, uint8_t *msg,
pf_msg->event_data.link_event_adv.link_status;
switch (pf_msg->event_data.link_event_adv.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
vf->link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
vf->link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_2_5G:
+ case RTE_ETH_SPEED_NUM_2_5G:
vf->link_speed = VIRTCHNL_LINK_SPEED_2_5GB;
break;
- case ETH_SPEED_NUM_5G:
+ case RTE_ETH_SPEED_NUM_5G:
vf->link_speed = VIRTCHNL_LINK_SPEED_5GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
vf->link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
vf->link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
vf->link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
vf->link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
@@ -1770,7 +1770,7 @@ static int
i40evf_init_vlan(struct rte_eth_dev *dev)
{
/* Apply vlan offload setting */
- i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+ i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
return 0;
}
@@ -1785,9 +1785,9 @@ i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
i40evf_enable_vlan_strip(dev);
else
i40evf_disable_vlan_strip(dev);
@@ -1933,7 +1933,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
/**
* Check if the jumbo frame and maximum packet length are set correctly
*/
- if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -1954,7 +1954,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
}
}
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size)
dev_data->scattered_rx = 1;
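The condition just renamed above decides whether the queue must chain received frames across multiple mbufs; a one-function sketch of that decision (assumes the post-rename flag name):

#include <rte_ethdev.h>

/* Scattered Rx is needed when the application asks for it explicitly,
 * or when a frame can be larger than one mbuf data buffer. */
static int needs_scattered_rx(uint64_t rx_offloads, uint32_t max_pkt_len,
			      uint32_t buf_size)
{
	return (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
	       max_pkt_len > buf_size;
}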
@@ -2290,35 +2290,35 @@ i40evf_dev_link_update(struct rte_eth_dev *dev,
/* Linux driver PF host */
switch (vf->link_speed) {
case I40E_LINK_SPEED_100MB:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case I40E_LINK_SPEED_1GB:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case I40E_LINK_SPEED_10GB:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case I40E_LINK_SPEED_20GB:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case I40E_LINK_SPEED_25GB:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case I40E_LINK_SPEED_40GB:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
default:
if (vf->link_up)
- new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
/* full duplex only */
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
new_link.link_autoneg =
- !(dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+ !(dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -2367,36 +2367,36 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = dev_info->max_rx_pktlen - I40E_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = vf->adapter->flow_types_mask;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = 0;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
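These capability masks are what applications test before enabling an offload; a short sketch of the application-side check (hypothetical helper built on standard rte_ethdev calls):

#include <rte_ethdev.h>

/* Enable VLAN stripping only if the PMD advertises it, as the VF
 * does in the rx_offload_capa list above. */
static int enable_vlan_strip_if_supported(uint16_t port_id,
					  struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
	return 0;
}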
@@ -2596,10 +2596,10 @@ i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
uint16_t i, idx, shift;
int ret;
- if (reta_size != ETH_RSS_RETA_SIZE_64) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number of hardware can "
- "support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+ "support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
return -EINVAL;
}
@@ -2635,10 +2635,10 @@ i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
uint8_t *lut;
int ret;
- if (reta_size != ETH_RSS_RETA_SIZE_64) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number of hardware can "
- "support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+ "support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
return -EINVAL;
}
@@ -2770,7 +2770,7 @@ i40evf_config_rss(struct i40e_vf *vf)
uint8_t *lut_info;
int ret;
- if (vf->dev_data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (vf->dev_data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
i40evf_disable_rss(vf);
PMD_DRV_LOG(DEBUG, "RSS not configured");
return 0;
@@ -2887,10 +2887,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > I40E_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return ret;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c47..d1cb992be61d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_EXTEND;
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
uint64_t reg_r = 0;
uint16_t reg_id;
uint16_t tpid;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfcc6..3755d4d3fe2a 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -102,47 +102,47 @@ struct i40e_hash_map_rss_inset {
const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
/* IPv4 */
- { ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* IPv6 */
- { ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_OTHER,
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
- { ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
/* Port */
- { ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+ { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
/* Ether */
- { ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
- { ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+ { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+ { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
/* VLAN */
- { ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
- { ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+ { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+ { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
};
#define I40E_HASH_VOID_NEXT_ALLOW BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -201,30 +201,30 @@ struct i40e_hash_match_pattern {
#define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
pattern, rss_mask, true, cus_pctype }
-#define I40E_HASH_L2_RSS_MASK (ETH_RSS_VLAN | ETH_RSS_ETH | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY)
#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
- ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY)
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
-#define I40E_HASH_IPV4_L23_RSS_MASK (ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK (ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
- ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY)
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
-#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
-#define I40E_HASH_L4_TYPES (ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
@@ -232,68 +232,68 @@ struct i40e_hash_match_pattern {
static const struct i40e_hash_match_pattern match_patterns[] = {
/* Ether */
I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
- ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+ RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
I40E_FILTER_PCTYPE_L2_PAYLOAD),
/* IPv4 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV4),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
I40E_HASH_IPV4_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
- ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
- ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
- ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
/* IPv6 */
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+ RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_FRAG_IPV6),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
I40E_HASH_IPV6_L23_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
- ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
- ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
- ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
/* ESP */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
- ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+ RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
/* GTPC */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -308,27 +308,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
I40E_HASH_IPV4_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
I40E_HASH_IPV6_L234_RSS_MASK,
I40E_CUSTOMIZED_GTPU),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
- ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+ RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
/* L2TPV3 */
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
- ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+ RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
/* AH */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+ I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
I40E_CUSTOMIZED_AH_IPV6),
};
@@ -564,29 +564,29 @@ i40e_hash_get_inset(uint64_t rss_types)
/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
* it is the same case as none of them are added.
*/
- mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
- if (mask == ETH_RSS_L2_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
inset &= ~I40E_INSET_DMAC;
- else if (mask == ETH_RSS_L2_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
inset &= ~I40E_INSET_SMAC;
- mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
- if (mask == ETH_RSS_L3_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
- else if (mask == ETH_RSS_L3_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
- mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
- if (mask == ETH_RSS_L4_SRC_ONLY)
+ mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
inset &= ~I40E_INSET_DST_PORT;
- else if (mask == ETH_RSS_L4_DST_ONLY)
+ else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
inset &= ~I40E_INSET_SRC_PORT;
if (rss_types & I40E_HASH_L4_TYPES) {
uint64_t l3_mask = rss_types &
- (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
uint64_t l4_mask = rss_types &
- (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+ (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
if (l3_mask && !l4_mask)
inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
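The hunk above encodes one rule at three layers: if exactly one of the SRC_ONLY/DST_ONLY pair is set, drop the opposite field from the input set; if both or neither are set, keep both fields. A minimal sketch of the L3 case (flag and inset bits are illustrative placeholders, not the real definitions):

#include <stdint.h>

#define RSS_L3_SRC_ONLY (1ULL << 0) /* placeholder values */
#define RSS_L3_DST_ONLY (1ULL << 1)
#define INSET_IP_SRC    (1ULL << 2)
#define INSET_IP_DST    (1ULL << 3)

static uint64_t apply_l3_only_flags(uint64_t rss_types, uint64_t inset)
{
	uint64_t mask = rss_types & (RSS_L3_SRC_ONLY | RSS_L3_DST_ONLY);

	if (mask == RSS_L3_SRC_ONLY)
		inset &= ~INSET_IP_DST;  /* hash on the source address only */
	else if (mask == RSS_L3_DST_ONLY)
		inset &= ~INSET_IP_SRC;  /* hash on the destination address only */
	/* both or neither requested: keep src and dst in the input set */
	return inset;
}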
@@ -825,7 +825,7 @@ i40e_hash_config(struct i40e_pf *pf,
/* Update lookup table */
if (rss_info->queue_num > 0) {
- uint8_t lut[ETH_RSS_RETA_SIZE_512];
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i, j = 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -932,7 +932,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
"RSS key is ignored when queues specified");
pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
max_queue = i40e_pf_calc_configured_queues_num(pf);
else
max_queue = pf->dev_data->nb_rx_queues;
@@ -1070,22 +1070,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
uint64_t type, mask;
/* Validate L2 */
- type = ETH_RSS_ETH & rss_types;
- mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+ type = RTE_ETH_RSS_ETH & rss_types;
+ mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L3 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
- mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+ mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
if (!type && mask)
return false;
/* Validate L4 */
- type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
- mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+ mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
if (!type && mask)
return false;
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
event.event_data.link_event.link_status =
dev->data->dev_link.link_status;
- /* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+ /* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
switch (dev->data->dev_link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
break;
- case ETH_SPEED_NUM_20G:
+ case RTE_ETH_SPEED_NUM_20G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
break;
- case ETH_SPEED_NUM_25G:
+ case RTE_ETH_SPEED_NUM_25G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
break;
- case ETH_SPEED_NUM_40G:
+ case RTE_ETH_SPEED_NUM_40G:
event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 8329cbdd4e30..3bad4052ed1b 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
for (i = 0; i < tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
if (k) {
for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -2005,7 +2005,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->queue_id = queue_idx;
rxq->reg_idx = reg_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2265,7 +2265,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
}
/* check simple tx conflict */
if (ad->tx_simple_allowed) {
- if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+ if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
PMD_DRV_LOG(ERR, "No-simple tx is required.");
return -EINVAL;
@@ -2925,7 +2925,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -3441,7 +3441,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
ad->tx_vec_allowed = (ad->tx_simple_allowed &&
txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e857..303a4db47dbd 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
bool rx_deferred_start; /**< don't start this queue in dev start */
uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
uint8_t dcb_tc; /**< Traffic class of rx queue */
- uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
};
struct i40e_tx_entry {
@@ -165,7 +165,7 @@ struct i40e_tx_queue {
bool q_set; /**< indicate if tx queue has been configured */
bool tx_deferred_start; /**< don't start this queue in dev start */
uint8_t dcb_tc; /**< Traffic class of tx queue */
- uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
};
/** Offload features */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d6422394..5f00d43950aa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -899,7 +899,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52ed98d62d0..0192164c35fa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
*/
txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < n; i++) {
free[i] = txep[i].mbuf;
txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
struct i40e_rx_queue *rxq;
uint16_t desc, i;
bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
return -1;
/* no header split support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
return -1;
/* no QinQ support */
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return -1;
/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b5538132..6d90b0f3511b 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
return -EINVAL;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* Enable or disable VLAN filtering offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
return i40e_vsi_config_vlan_filter(vsi, TRUE);
else
return i40e_vsi_config_vlan_filter(vsi, FALSE);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping offload */
if (ethdev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
return i40e_vsi_config_vlan_stripping(vsi, TRUE);
else
return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index b3bd07811198..1d4383e89327 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -48,18 +48,18 @@
VIRTCHNL_VF_OFFLOAD_RX_POLLING)
#define IAVF_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e7c..94f6f4704b9c 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -265,53 +265,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
static const uint64_t map_hena_rss[] = {
/* IPv4 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
- ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
- ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
- ETH_RSS_NONFRAG_IPV4_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
- ETH_RSS_NONFRAG_IPV4_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
/* IPv6 */
[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
- ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
- ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
- ETH_RSS_NONFRAG_IPV6_SCTP,
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
- ETH_RSS_NONFRAG_IPV6_OTHER,
- [IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
/* L2 Payload */
- [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+ [IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
};
- const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV4_OTHER |
- ETH_RSS_FRAG_IPV4;
+ const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_FRAG_IPV4;
- const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP |
- ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_FRAG_IPV6;
+ const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_FRAG_IPV6;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -330,13 +330,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
/**
- * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+ * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
* generalizations of all other IPv4 and IPv6 RSS types.
*/
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
rss_hf |= ipv4_rss;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
rss_hf |= ipv6_rss;
RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
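The comment just updated is the key to this function: the two umbrella bits widen the requested set before it is mapped onto HENA bits. A reduced sketch of that expansion (flag values are placeholders):

#include <stdint.h>

#define RSS_IPV4     (1ULL << 0) /* placeholder values */
#define RSS_IPV4_UDP (1ULL << 1)
#define RSS_IPV4_TCP (1ULL << 2)

static uint64_t expand_rss_hf(uint64_t rss_hf)
{
	const uint64_t ipv4_rss = RSS_IPV4_UDP | RSS_IPV4_TCP;

	/* the umbrella type implies every protocol-specific subtype */
	if (rss_hf & RSS_IPV4)
		rss_hf |= ipv4_rss;
	return rss_hf;
}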
@@ -362,10 +362,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
}
if (valid_rss_hf & ipv4_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
if (valid_rss_hf & ipv6_rss)
- valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+ valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
if (rss_hf & ~valid_rss_hf)
PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -466,7 +466,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
return 0;
enable = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
iavf_config_vlan_insert_v2(adapter, enable);
return 0;
@@ -478,10 +478,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
int err;
err = iavf_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK |
- ETH_QINQ_STRIP_MASK |
- ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_QINQ_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to update vlan offload");
return err;
@@ -511,8 +511,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
ad->rx_vec_allowed = true;
ad->tx_vec_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Large VF setting */
if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -585,7 +585,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -608,7 +608,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
rxq->max_pkt_len > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -943,35 +943,35 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1031,42 +1031,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
*/
switch (vf->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = vf->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
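The VF reports its link speed as a plain Mbps number, and the RTE_ETH_SPEED_NUM_* constants are themselves the Mbps values, which is why the switch above maps one-to-one. A table-driven sketch of the same validation (hypothetical helper, not the driver's code):

#include <stdint.h>

static const uint32_t iavf_known_speeds[] = {
	10, 100, 1000, 10000, 20000, 25000, 40000, 50000, 100000,
};

/* Return the speed unchanged if the VF reported a known rate,
 * otherwise 0 (the value of RTE_ETH_SPEED_NUM_NONE). */
static uint32_t validate_vf_speed(uint32_t mbps)
{
	unsigned int i;

	for (i = 0; i < sizeof(iavf_known_speeds) / sizeof(iavf_known_speeds[0]); i++)
		if (iavf_known_speeds[i] == mbps)
			return mbps;
	return 0;
}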
@@ -1214,14 +1214,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
bool enable;
int err;
- if (mask & ETH_VLAN_FILTER_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
iavf_iterate_vlan_filters_v2(dev, enable);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = iavf_config_vlan_strip_v2(adapter, enable);
/* If not support, the stripping is already disabled by PF */
@@ -1250,9 +1250,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
/* Enable or disable VLAN stripping */
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
err = iavf_enable_vlan_strip(adapter);
else
err = iavf_disable_vlan_strip(adapter);
@@ -1457,10 +1457,10 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > IAVF_ETH_MAX_LEN)
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -1564,7 +1564,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
ret = iavf_query_stats(adapter, &pstats);
if (ret == 0) {
uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
RTE_ETHER_CRC_LEN;
iavf_update_stats(vsi, pstats);
stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 2b03dad8589c..1329a389f742 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,83 +341,83 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
/* rss type super set */
/* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define IAVF_RSS_TYPE_OUTER_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
#define IAVF_RSS_TYPE_OUTER_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_FRAG_IPV6)
#define IAVF_RSS_TYPE_OUTER_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IAVF_RSS_TYPE_OUTER_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define IAVF_RSS_TYPE_OUTER_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define IAVF_RSS_TYPE_VLAN_IPV4 (IAVF_RSS_TYPE_OUTER_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_UDP (IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_TCP (IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV4_SCTP (IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define IAVF_RSS_TYPE_VLAN_IPV6 (IAVF_RSS_TYPE_OUTER_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG (IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_UDP (IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_TCP (IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define IAVF_RSS_TYPE_VLAN_IPV6_SCTP (IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4 ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4 RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6 ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6 RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* GTPU IPv4 */
#define IAVF_RSS_TYPE_GTPU_IPV4 (IAVF_RSS_TYPE_INNER_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_UDP (IAVF_RSS_TYPE_INNER_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV4_TCP (IAVF_RSS_TYPE_INNER_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define IAVF_RSS_TYPE_GTPU_IPV6 (IAVF_RSS_TYPE_INNER_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_UDP (IAVF_RSS_TYPE_INNER_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define IAVF_RSS_TYPE_GTPU_IPV6_TCP (IAVF_RSS_TYPE_INNER_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/**
* Supported pattern for hash.
@@ -435,7 +435,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv4_udp, IAVF_RSS_TYPE_VLAN_IPV4_UDP, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_vlan_ipv4_tcp, IAVF_RSS_TYPE_VLAN_IPV4_TCP, &outer_ipv4_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv4_sctp, IAVF_RSS_TYPE_VLAN_IPV4_SCTP, &outer_ipv4_sctp_tmplt},
- {iavf_pattern_eth_ipv4_gtpu, ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
+ {iavf_pattern_eth_ipv4_gtpu, RTE_ETH_RSS_IPV4, &outer_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4, IAVF_RSS_TYPE_GTPU_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_udp, IAVF_RSS_TYPE_GTPU_IPV4_UDP, &inner_ipv4_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp, IAVF_RSS_TYPE_GTPU_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -477,9 +477,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv4_ah, IAVF_RSS_TYPE_IPV4_AH, &ipv4_ah_tmplt},
{iavf_pattern_eth_ipv4_l2tpv3, IAVF_RSS_TYPE_IPV4_L2TPV3, &ipv4_l2tpv3_tmplt},
{iavf_pattern_eth_ipv4_pfcp, IAVF_RSS_TYPE_IPV4_PFCP, &ipv4_pfcp_tmplt},
- {iavf_pattern_eth_ipv4_gtpc, ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
- {iavf_pattern_eth_ecpri, ETH_RSS_ECPRI, &eth_ecpri_tmplt},
- {iavf_pattern_eth_ipv4_ecpri, ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_gtpc, RTE_ETH_RSS_IPV4, &ipv4_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ecpri, RTE_ETH_RSS_ECPRI, &eth_ecpri_tmplt},
+ {iavf_pattern_eth_ipv4_ecpri, RTE_ETH_RSS_ECPRI, &ipv4_ecpri_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv4, IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv4_tcp, IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -497,7 +497,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_vlan_ipv6_udp, IAVF_RSS_TYPE_VLAN_IPV6_UDP, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_vlan_ipv6_tcp, IAVF_RSS_TYPE_VLAN_IPV6_TCP, &outer_ipv6_tcp_tmplt},
{iavf_pattern_eth_vlan_ipv6_sctp, IAVF_RSS_TYPE_VLAN_IPV6_SCTP, &outer_ipv6_sctp_tmplt},
- {iavf_pattern_eth_ipv6_gtpu, ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
+ {iavf_pattern_eth_ipv6_gtpu, RTE_ETH_RSS_IPV6, &outer_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6, IAVF_RSS_TYPE_GTPU_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_udp, IAVF_RSS_TYPE_GTPU_IPV6_UDP, &inner_ipv6_udp_tmplt},
{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp, IAVF_RSS_TYPE_GTPU_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -539,7 +539,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
{iavf_pattern_eth_ipv6_ah, IAVF_RSS_TYPE_IPV6_AH, &ipv6_ah_tmplt},
{iavf_pattern_eth_ipv6_l2tpv3, IAVF_RSS_TYPE_IPV6_L2TPV3, &ipv6_l2tpv3_tmplt},
{iavf_pattern_eth_ipv6_pfcp, IAVF_RSS_TYPE_IPV6_PFCP, &ipv6_pfcp_tmplt},
- {iavf_pattern_eth_ipv6_gtpc, ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
+ {iavf_pattern_eth_ipv6_gtpc, RTE_ETH_RSS_IPV6, &ipv6_udp_gtpc_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv6_gre_ipv6, IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
{iavf_pattern_eth_ipv4_gre_ipv6_tcp, IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -573,57 +573,57 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
struct virtchnl_rss_cfg rss_cfg;
#define IAVF_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
rss_cfg.proto_hdrs = inner_ipv4_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
rss_cfg.proto_hdrs = inner_ipv6_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_FRAG_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
struct virtchnl_proto_hdrs hdr = {
.tunnel_level = TUNNEL_LEVEL_OUTER,
.count = 3,
@@ -641,7 +641,7 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
iavf_add_del_rss_cfg(ad, &rss_cfg, add);
}
- if (rss_hf & ETH_RSS_FRAG_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
struct virtchnl_proto_hdrs hdr = {
.tunnel_level = TUNNEL_LEVEL_OUTER,
.count = 3,
@@ -804,28 +804,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
hdr = &proto_hdrs->proto_hdr[i];
switch (hdr->type) {
case VIRTCHNL_PROTO_HDR_ETH:
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
hdr->field_selector = 0;
- else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
REFINE_PROTO_FLD(DEL, ETH_DST);
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
REFINE_PROTO_FLD(DEL, ETH_SRC);
break;
case VIRTCHNL_PROTO_HDR_IPV4:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
- } else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV4_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV4_DST);
REFINE_PROTO_FLD(DEL, IPV4_SRC);
}
@@ -835,11 +835,11 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
} else {
hdr->field_selector = 0;
@@ -847,17 +847,17 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_IPV6:
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
REFINE_PROTO_FLD(DEL, IPV6_SRC);
} else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY)) {
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY)) {
REFINE_PROTO_FLD(DEL, IPV6_DST);
REFINE_PROTO_FLD(DEL, IPV6_SRC);
}
@@ -874,7 +874,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
}
break;
case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
else
hdr->field_selector = 0;
@@ -882,15 +882,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_UDP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
@@ -898,15 +898,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_TCP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
@@ -914,46 +914,46 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
break;
case VIRTCHNL_PROTO_HDR_SCTP:
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
hdr->field_selector = 0;
} else {
hdr->field_selector = 0;
}
break;
case VIRTCHNL_PROTO_HDR_S_VLAN:
- if (!(rss_type & ETH_RSS_S_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_S_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_C_VLAN:
- if (!(rss_type & ETH_RSS_C_VLAN))
+ if (!(rss_type & RTE_ETH_RSS_C_VLAN))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_L2TPV3:
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ESP:
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_AH:
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_PFCP:
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
hdr->field_selector = 0;
break;
case VIRTCHNL_PROTO_HDR_ECPRI:
- if (!(rss_type & ETH_RSS_ECPRI))
+ if (!(rss_type & RTE_ETH_RSS_ECPRI))
hdr->field_selector = 0;
break;
default:
@@ -970,7 +970,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
struct virtchnl_proto_hdr *hdr;
int i;
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
for (i = 0; i < proto_hdrs->count; i++) {
@@ -1067,10 +1067,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -1081,27 +1081,27 @@ struct rss_attr_type {
uint64_t type;
};
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE64)
#define INVALID_RSS_ATTR (RTE_ETH_RSS_L3_PRE32 | \
@@ -1111,9 +1111,9 @@ struct rss_attr_type {
RTE_ETH_RSS_L3_PRE96)
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* current ipv6 prefix only supports prefix 64 bits*/
{RTE_ETH_RSS_L3_PRE64, VALID_RSS_IPV6},
{INVALID_RSS_ATTR, 0}
@@ -1130,15 +1130,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
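For context, the RSS bits renamed above are the same public flags an application passes at configure time, so the rename is visible on the application side as well. A minimal sketch under the 21.11 ethdev API; configure_rss() and the queue counts are illustrative names, not part of this patch:

#include <rte_ethdev.h>

/* Request RSS with the renamed flags; drop anything the port cannot hash on. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL,	/* let the PMD choose a key */
			.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV4_UDP |
				  RTE_ETH_RSS_NONFRAG_IPV6_UDP,
		},
	};
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	/* Keep only the hash types the device actually reports. */
	conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}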
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e33fe4576b6e..4ff856fc82aa 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -609,7 +609,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->vsi = vsi;
rxq->offloads = offloads;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d633..096be81e8a69 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
#define IAVF_VPMD_TX_MAX_FREE_BUF 64
#define IAVF_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define IAVF_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define IAVF_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IAVF_VECTOR_PATH 0
#define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036ef..8f9a397e4143 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -904,7 +904,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH ||
+ RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -957,7 +957,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
raw_desc_bh1, 1);
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cdec..2329928c62cb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1138,7 +1138,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH ||
+ RTE_ETH_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1191,7 +1191,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
raw_desc_bh1, 1);
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
* highest 32b of each 128b before mask
@@ -1719,7 +1719,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e9055259b..58f928bdd7ca 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -818,7 +818,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 4c2e0c7216fd..ec53478083b4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -807,7 +807,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
PMD_DRV_LOG(DEBUG, "RSS is not supported");
return -ENOTSUP;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
/* set all lut items to default queue */
memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
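The vector Rx paths above gate hash extraction on RTE_ETH_RX_OFFLOAD_RSS_HASH; when that offload is enabled the hash lands in the mbuf. A minimal consumer-side sketch, assuming the pre-rename mbuf flag name that was current alongside this series (RTE_MBUF_F_RX_RSS_HASH after the equivalent mbuf rename):

#include <rte_mbuf.h>

/* The hash field is valid only when the Rx path set the RSS-hash flag. */
static inline uint32_t
pkt_rss_hash(const struct rte_mbuf *m)
{
	return (m->ol_flags & PKT_RX_RSS_HASH) ? m->hash.rss : 0;
}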
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da8759..6226aa5a80c2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,7 +66,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -89,7 +89,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
}
rxq->max_pkt_len = max_pkt_len;
- if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
dev_data->scattered_rx = 1;
}
@@ -559,7 +559,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -620,7 +620,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
}
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
ad->pf.adapter_stopped = 1;
return 0;
@@ -635,8 +635,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
return 0;
}
@@ -658,28 +658,28 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -896,42 +896,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
*/
switch (hw->link_speed) {
case 10:
- new_link.link_speed = ETH_SPEED_NUM_10M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case 100:
- new_link.link_speed = ETH_SPEED_NUM_100M;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case 1000:
- new_link.link_speed = ETH_SPEED_NUM_1G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case 10000:
- new_link.link_speed = ETH_SPEED_NUM_10G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 20000:
- new_link.link_speed = ETH_SPEED_NUM_20G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case 25000:
- new_link.link_speed = ETH_SPEED_NUM_25G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- new_link.link_speed = ETH_SPEED_NUM_40G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- new_link.link_speed = ETH_SPEED_NUM_50G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- new_link.link_speed = ETH_SPEED_NUM_100G;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- new_link.link_speed = ETH_SPEED_NUM_NONE;
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
- new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- new_link.link_status = hw->link_up ? ETH_LINK_UP :
- ETH_LINK_DOWN;
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
return rte_eth_linkstatus_set(dev, &new_link);
}
@@ -950,11 +950,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
udp_tunnel->udp_port);
break;
@@ -981,8 +981,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
- case RTE_TUNNEL_TYPE_ECPRI:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_ECPRI:
ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
break;
default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..0dac1b92bfdb 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
static int
ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -135,29 +135,29 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -239,9 +239,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return -ENOTSUP;
/* Vlan stripping setting */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
bool enable = !!(dev_conf->rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
@@ -338,7 +338,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (!ice_dcf_vlan_offload_ena(repr))
return -ENOTSUP;
- if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
"Can accelerate only outer VLAN in QinQ\n");
return -EINVAL;
@@ -368,7 +368,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
err = ice_dcf_vf_repr_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
"Failed to reset VLAN stripping : %d\n",
@@ -441,7 +441,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
int err;
err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
- ETH_VLAN_STRIP_MASK);
+ RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
return err;
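The representor code above applies RTE_ETH_VLAN_STRIP_MASK through its vlan_offload_set callback; from the application side the same toggle goes through the public API. A minimal sketch, assuming a valid port_id; enable_vlan_strip() is an illustrative name, not part of this patch:

#include <rte_ethdev.h>

/* Read the current VLAN offload bitmap, set stripping, write it back.
 * Only the bits that change are pushed down to the PMD callback shown above.
 */
static int
enable_vlan_strip(uint16_t port_id)
{
	int offloads = rte_eth_dev_get_vlan_offload(port_id);

	if (offloads < 0)
		return offloads;
	return rte_eth_dev_set_vlan_offload(port_id,
					    offloads | RTE_ETH_VLAN_STRIP_OFFLOAD);
}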
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954f1..d79cc549da19 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1449,9 +1449,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
TAILQ_INIT(&vsi->mac_list);
TAILQ_INIT(&vsi->vlan_list);
- /* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+ /* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
- ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+ RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
hw->func_caps.common_cap.rss_table_size;
pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
@@ -2809,16 +2809,16 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
int ret;
#define ICE_RSS_HF_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV6)
ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
if (ret)
@@ -2828,7 +2828,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
cfg.symm = 0;
cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
/* Configure RSS for IPv4 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2838,7 +2838,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for IPv6 with src/dst addr as input set */
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2848,7 +2848,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2859,7 +2859,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for udp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2870,7 +2870,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2881,7 +2881,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for tcp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2892,7 +2892,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp4 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -2903,7 +2903,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
}
/* Configure RSS for sctp6 with src/dst addr and port as input set */
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -2913,7 +2913,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -2923,7 +2923,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -2933,7 +2933,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2943,7 +2943,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2953,7 +2953,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2963,7 +2963,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2973,7 +2973,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_FRAG_IPV4) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_FRAG;
cfg.hash_flds = ICE_FLOW_HASH_IPV4 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2982,7 +2982,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
__func__, ret);
}
- if (rss_hf & ETH_RSS_FRAG_IPV6) {
+ if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_FRAG;
cfg.hash_flds = ICE_FLOW_HASH_IPV6 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3124,8 +3124,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
ad->rx_bulk_alloc_allowed = true;
ad->tx_simple_allowed = true;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_rx_queues) {
ret = ice_init_rss(pf);
@@ -3344,8 +3344,8 @@ ice_dev_start(struct rte_eth_dev *dev)
ice_set_rx_function(dev);
ice_set_tx_function(dev);
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = ice_vlan_offload_set(dev, mask);
if (ret) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3449,40 +3449,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->flow_type_rss_offloads = 0;
if (!is_safe_mode) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
dev_info->rx_queue_offload_capa = 0;
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->reta_size = pf->hash_lut_size;
dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3521,24 +3521,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_align = ICE_ALIGN_RING_DESC,
};
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_25G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_25G;
phy_type_low = hw->port_info->phy.phy_type_low;
phy_type_high = hw->port_info->phy.phy_type_high;
if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->nb_rx_queues = dev->data->nb_rx_queues;
dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3603,8 +3603,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
status = ice_aq_get_link_info(hw->port_info, enable_lse,
&link_status, NULL);
if (status != ICE_SUCCESS) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_DRV_LOG(ERR, "Failed to get link info");
goto out;
}
@@ -3620,55 +3620,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
goto out;
/* Full-duplex operation at all supported speeds */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
/* Parse the link status */
switch (link_status.link_speed) {
case ICE_AQ_LINK_SPEED_10MB:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case ICE_AQ_LINK_SPEED_100MB:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case ICE_AQ_LINK_SPEED_1000MB:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case ICE_AQ_LINK_SPEED_2500MB:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case ICE_AQ_LINK_SPEED_5GB:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case ICE_AQ_LINK_SPEED_10GB:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case ICE_AQ_LINK_SPEED_20GB:
- link.link_speed = ETH_SPEED_NUM_20G;
+ link.link_speed = RTE_ETH_SPEED_NUM_20G;
break;
case ICE_AQ_LINK_SPEED_25GB:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case ICE_AQ_LINK_SPEED_40GB:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case ICE_AQ_LINK_SPEED_50GB:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case ICE_AQ_LINK_SPEED_100GB:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
case ICE_AQ_LINK_SPEED_UNKNOWN:
PMD_DRV_LOG(ERR, "Unknown link speed");
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
default:
PMD_DRV_LOG(ERR, "None link speed");
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
out:
ice_atomic_write_link_status(dev, &link);
@@ -3767,10 +3767,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (frame_size > ICE_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -4161,15 +4161,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ice_vsi_config_vlan_filter(vsi, true);
else
ice_vsi_config_vlan_filter(vsi, false);
}
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
ice_vsi_config_vlan_stripping(vsi, true);
else
ice_vsi_config_vlan_stripping(vsi, false);
@@ -5244,7 +5244,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
break;
default:
@@ -5268,7 +5268,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
break;
default:
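The udp_tunnel_port_add/del hunks above are reached through rte_eth_dev_udp_tunnel_port_add() with the renamed tunnel type. A minimal sketch, assuming a valid port_id; the IANA default VXLAN port 4789 is only an example value:

#include <rte_ethdev.h>

/* Register a UDP destination port as VXLAN so the PMD parses the tunnel. */
static int
add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789,
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}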
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index b4bf651c1c7f..1c4bc4e30349 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -115,19 +115,19 @@
ICE_FLAG_VF_MAC_BY_PF)
#define ICE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD)
/**
* The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 54d14dfcddfb..beb863f70568 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
#define ICE_IPV4_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
#define ICE_IPV6_PROT BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
-#define VALID_RSS_IPV4_L4 (ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4 (RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-#define VALID_RSS_IPV6_L4 (ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4 (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define VALID_RSS_IPV4 (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6 (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6 (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
VALID_RSS_IPV6_L4)
#define VALID_RSS_L3 (VALID_RSS_IPV4 | VALID_RSS_IPV6)
#define VALID_RSS_L4 (VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
-#define VALID_RSS_ATTR (ETH_RSS_L3_SRC_ONLY | \
- ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | \
- ETH_RSS_L4_DST_ONLY | \
- ETH_RSS_L2_SRC_ONLY | \
- ETH_RSS_L2_DST_ONLY | \
+#define VALID_RSS_ATTR (RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY | \
RTE_ETH_RSS_L3_PRE32 | \
RTE_ETH_RSS_L3_PRE48 | \
RTE_ETH_RSS_L3_PRE64)
@@ -373,80 +373,80 @@ struct ice_rss_hash_cfg eth_tmplt = {
};
/* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4 (ETH_RSS_ETH | ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4)
+#define ICE_RSS_TYPE_ETH_IPV4 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_ETH_IPV4_UDP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
#define ICE_RSS_TYPE_ETH_IPV4_TCP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
#define ICE_RSS_TYPE_ETH_IPV4_SCTP (ICE_RSS_TYPE_ETH_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
-#define ICE_RSS_TYPE_IPV4 ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP (ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+#define ICE_RSS_TYPE_IPV4 RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP (RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
/* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6 (ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG (ETH_RSS_ETH | ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6 (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG (RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_ETH_IPV6_UDP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define ICE_RSS_TYPE_ETH_IPV6_TCP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define ICE_RSS_TYPE_ETH_IPV6_SCTP (ICE_RSS_TYPE_ETH_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-#define ICE_RSS_TYPE_IPV6 ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP (ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ICE_RSS_TYPE_IPV6 RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP (RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
/* VLAN IPV4 */
#define ICE_RSS_TYPE_VLAN_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV4)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV4)
#define ICE_RSS_TYPE_VLAN_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV4_SCTP (ICE_RSS_TYPE_IPV4_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* VLAN IPv6 */
#define ICE_RSS_TYPE_VLAN_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_FRAG (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
- ETH_RSS_FRAG_IPV6)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+ RTE_ETH_RSS_FRAG_IPV6)
#define ICE_RSS_TYPE_VLAN_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
#define ICE_RSS_TYPE_VLAN_IPV6_SCTP (ICE_RSS_TYPE_IPV6_SCTP | \
- ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+ RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
/* GTPU IPv4 */
#define ICE_RSS_TYPE_GTPU_IPV4 (ICE_RSS_TYPE_IPV4 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_UDP (ICE_RSS_TYPE_IPV4_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV4_TCP (ICE_RSS_TYPE_IPV4_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* GTPU IPv6 */
#define ICE_RSS_TYPE_GTPU_IPV6 (ICE_RSS_TYPE_IPV6 | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_UDP (ICE_RSS_TYPE_IPV6_UDP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
#define ICE_RSS_TYPE_GTPU_IPV6_TCP (ICE_RSS_TYPE_IPV6_TCP | \
- ETH_RSS_GTPU)
+ RTE_ETH_RSS_GTPU)
/* PPPOE */
-#define ICE_RSS_TYPE_PPPOE (ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE (RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
/* PPPOE IPv4 */
#define ICE_RSS_TYPE_PPPOE_IPV4 (ICE_RSS_TYPE_IPV4 | \
@@ -465,17 +465,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
ICE_RSS_TYPE_PPPOE)
/* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP (ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP (ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH (ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH (ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3 (ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP (ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP (RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH (RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3 (RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP (RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
/* MAC */
-#define ICE_RSS_TYPE_ETH ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH RTE_ETH_RSS_ETH
/**
* Supported pattern for hash.
@@ -640,51 +640,51 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
- if (!(rss_type & ETH_RSS_ETH))
+ if (!(rss_type & RTE_ETH_RSS_ETH))
*hash_flds &= ~ICE_FLOW_HASH_ETH;
- if (rss_type & ETH_RSS_L2_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
- else if (rss_type & ETH_RSS_L2_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
- if (rss_type & ETH_RSS_ETH)
+ if (rss_type & RTE_ETH_RSS_ETH)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
- if (rss_type & ETH_RSS_C_VLAN)
+ if (rss_type & RTE_ETH_RSS_C_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
- else if (rss_type & ETH_RSS_S_VLAN)
+ else if (rss_type & RTE_ETH_RSS_S_VLAN)
*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
- if (!(rss_type & ETH_RSS_PPPOE))
+ if (!(rss_type & RTE_ETH_RSS_PPPOE))
*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
if (rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV4) {
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
}
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV4;
@@ -693,30 +693,30 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
if (rss_type &
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_FRAG_IPV6)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
*hash_flds |=
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
- if (rss_type & ETH_RSS_L3_SRC_ONLY)
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
- else if (rss_type & ETH_RSS_L3_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
else if (rss_type &
- (ETH_RSS_L4_SRC_ONLY |
- ETH_RSS_L4_DST_ONLY))
+ (RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
} else {
*hash_flds &= ~ICE_FLOW_HASH_IPV6;
}
if (rss_type & RTE_ETH_RSS_L3_PRE32) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
} else {
@@ -725,10 +725,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE48) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
} else {
@@ -737,10 +737,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
}
if (rss_type & RTE_ETH_RSS_L3_PRE64) {
- if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+ if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
- } else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+ } else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
} else {
@@ -752,15 +752,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
@@ -769,15 +769,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
@@ -786,15 +786,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
if (rss_type &
- (ETH_RSS_NONFRAG_IPV4_SCTP |
- ETH_RSS_NONFRAG_IPV6_SCTP)) {
- if (rss_type & ETH_RSS_L4_SRC_ONLY)
+ (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+ if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
- else if (rss_type & ETH_RSS_L4_DST_ONLY)
+ else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
else if (rss_type &
- (ETH_RSS_L3_SRC_ONLY |
- ETH_RSS_L3_DST_ONLY))
+ (RTE_ETH_RSS_L3_SRC_ONLY |
+ RTE_ETH_RSS_L3_DST_ONLY))
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
} else {
*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
@@ -802,22 +802,22 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
- if (!(rss_type & ETH_RSS_L2TPV3))
+ if (!(rss_type & RTE_ETH_RSS_L2TPV3))
*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
- if (!(rss_type & ETH_RSS_ESP))
+ if (!(rss_type & RTE_ETH_RSS_ESP))
*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
- if (!(rss_type & ETH_RSS_AH))
+ if (!(rss_type & RTE_ETH_RSS_AH))
*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
}
if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
- if (!(rss_type & ETH_RSS_PFCP))
+ if (!(rss_type & RTE_ETH_RSS_PFCP))
*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
}
}
@@ -851,7 +851,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
uint64_t *hash_flds = &hash_cfg->hash_flds;
/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
- if (!(rss_type & ETH_RSS_GTPU))
+ if (!(rss_type & RTE_ETH_RSS_GTPU))
return;
if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -873,10 +873,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
}
static uint64_t invalid_rss_comb[] = {
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
- ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
- ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
RTE_ETH_RSS_L3_PRE40 |
RTE_ETH_RSS_L3_PRE56 |
RTE_ETH_RSS_L3_PRE96
@@ -888,9 +888,9 @@ struct rss_attr_type {
};
static struct rss_attr_type rss_attr_to_valid_type[] = {
- {ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY, ETH_RSS_ETH},
- {ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
- {ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
+ {RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY, RTE_ETH_RSS_ETH},
+ {RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY, VALID_RSS_L3},
+ {RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY, VALID_RSS_L4},
/* currently, IPv6 prefix only supports up to 64 bits */
{RTE_ETH_RSS_L3_PRE32, VALID_RSS_IPV6},
{RTE_ETH_RSS_L3_PRE48, VALID_RSS_IPV6},
@@ -909,16 +909,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
* hash function.
*/
if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+ if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
return true;
if (!(rss_type &
- (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
- ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
return true;
}
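
For reference, the renamed RSS flags are consumed through the same ethdev
configuration path as before. Below is a minimal C sketch of an application
selecting them; the port id, queue counts and helper name are illustrative
assumptions, not part of this patch:

#include <rte_ethdev.h>

/* Hash UDP-over-IPv4 flows on the source port only. The
 * RTE_ETH_RSS_L4_SRC_ONLY attribute is what the refinement code
 * above translates into a mask on the destination-port hash field. */
static int
configure_rss(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
					       RTE_ETH_RSS_L4_SRC_ONLY,
	};

	/* 4 Rx queues and 1 Tx queue, purely illustrative. */
	return rte_eth_dev_configure(port_id, 4, 1, &conf);
}
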
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047ee..63c07e001f07 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -280,7 +280,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
dev_data->dev_conf.rxmode.max_rx_pkt_len);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -1103,7 +1103,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = vsi->base_queue + queue_idx;
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2780,7 +2780,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
for (i = 0; i < txq->tx_rs_thresh; i++)
rte_prefetch0((txep + i)->mbuf);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
rte_mempool_put(txep->mbuf->pool, txep->mbuf);
txep->mbuf = NULL;
@@ -3254,7 +3254,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
/* Use a simple Tx queue if possible (only fast free is allowed) */
ad->tx_simple_allowed =
(txq->offloads ==
- (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac018043..8c870354619e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -473,7 +473,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d296..6d2038975830 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -584,7 +584,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
@@ -994,7 +994,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
txep = (void *)txq->sw_ring;
txep += txq->tx_next_dd - (n - 1);
- if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
struct rte_mempool *mp = txep[0].mbuf->pool;
void **cache_objs;
struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a93..a5b573c22da2 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
}
#define ICE_TX_NO_VECTOR_FLAGS ( \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
#define ICE_TX_VECTOR_OFFLOAD ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define ICE_RX_VECTOR_OFFLOAD ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define ICE_VECTOR_PATH 0
#define ICE_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* will cause performance drop to get into this context.
*/
if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_RSS_HASH) {
+ RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..d8f5a786efac 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -314,8 +314,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rx_mq_mode != ETH_MQ_RX_NONE &&
- rx_mq_mode != ETH_MQ_RX_RSS) {
+ if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
/* RSS together with VMDq is not supported */
PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
rx_mq_mode);
@@ -325,7 +325,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
/* To not break software that sets an invalid mode, only display a
* warning if an invalid mode is used.
*/
- if (tx_mq_mode != ETH_MQ_TX_NONE)
+ if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
PMD_INIT_LOG(WARNING,
"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
tx_mq_mode);
@@ -341,8 +341,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
ret = igc_check_mq_mode(dev);
if (ret != 0)
@@ -480,12 +480,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
uint16_t duplex, speed;
hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
link.link_duplex = (duplex == FULL_DUPLEX) ?
- ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
+ RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
link.link_speed = speed;
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
if (speed == SPEED_2500) {
uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -497,9 +497,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
} else {
link.link_speed = 0;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_status = ETH_LINK_DOWN;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -532,7 +532,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
" Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id,
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -979,18 +979,18 @@ eth_igc_start(struct rte_eth_dev *dev)
/* VLAN Offload Settings */
eth_igc_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK);
/* Setup link speed and duplex */
speeds = &dev->data->dev_conf.link_speeds;
- if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
hw->mac.autoneg = 1;
} else {
int num_speeds = 0;
- if (*speeds & ETH_LINK_SPEED_FIXED) {
+ if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_DRV_LOG(ERR,
"Force speed mode currently not supported");
igc_dev_clear_queues(dev);
@@ -1000,33 +1000,33 @@ eth_igc_start(struct rte_eth_dev *dev)
hw->phy.autoneg_advertised = 0;
hw->mac.autoneg = 1;
- if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+ if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
num_speeds = -1;
goto error_invalid_config;
}
- if (*speeds & ETH_LINK_SPEED_10M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_10M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_10M) {
hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M_HD) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_100M) {
+ if (*speeds & RTE_ETH_LINK_SPEED_100M) {
hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_1G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_1G) {
hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
num_speeds++;
}
- if (*speeds & ETH_LINK_SPEED_2_5G) {
+ if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
num_speeds++;
}
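
For context, these RTE_ETH_LINK_SPEED_* bits come from
rte_eth_conf.link_speeds, set by the application before configuring the
port. A minimal sketch, assuming an illustrative port id and speed choice:

#include <rte_ethdev.h>

static int
configure_speeds(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		/* Autonegotiate, but advertise only 100M and 1G;
		 * plain RTE_ETH_LINK_SPEED_AUTONEG would advertise all
		 * supported speeds. */
		.link_speeds = RTE_ETH_LINK_SPEED_100M |
			       RTE_ETH_LINK_SPEED_1G,
	};

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

ORing in RTE_ETH_LINK_SPEED_FIXED would request a forced speed, which the
igc code above rejects.
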
@@ -1490,14 +1490,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
- dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
dev_info->max_vmdq_pools = 0;
dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1523,9 +1523,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -1603,11 +1603,11 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
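
The jumbo-frame offload bit is toggled by the PMD itself as a side effect
of rte_eth_dev_set_mtu(); a small sketch of the application side, with an
illustrative port id and helper name:

#include <rte_ethdev.h>

static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	/* max_mtu is derived from max_rx_pktlen minus the L2 overhead. */
	if (mtu > info.max_mtu)
		return -EINVAL;
	return rte_eth_dev_set_mtu(port_id, mtu);
}
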
@@ -2165,13 +2165,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
rx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2203,16 +2203,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
hw->fc.requested_mode = igc_fc_none;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
hw->fc.requested_mode = igc_fc_rx_pause;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
hw->fc.requested_mode = igc_fc_tx_pause;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
hw->fc.requested_mode = igc_fc_full;
break;
default:
@@ -2258,17 +2258,17 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* set redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta, reg;
uint16_t idx, shift;
uint8_t j, mask;
@@ -2314,17 +2314,17 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint16_t i;
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR,
"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
- reta_size, ETH_RSS_RETA_SIZE_128);
+ reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
- RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+ RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
/* read redirection table */
- for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+ for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
union igc_rss_reta_reg reta;
uint16_t idx, shift;
uint8_t j, mask;
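
These RETA checks are driven by rte_eth_dev_rss_reta_update(). A sketch of
a caller spreading the 128-entry table over four queues; it assumes the
renamed RTE_ETH_RETA_GROUP_SIZE alias from this series, and the port id
and queue count are illustrative:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
setup_reta(uint16_t port_id)
{
	struct rte_eth_rss_reta_entry64
		reta[RTE_ETH_RSS_RETA_SIZE_128 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		/* Mark the entry valid and map it round-robin to queues 0-3. */
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			UINT64_C(1) << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta
			[i % RTE_ETH_RETA_GROUP_SIZE] = i % 4;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta,
					   RTE_ETH_RSS_RETA_SIZE_128);
}
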
@@ -2393,23 +2393,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_hf = 0;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf |= rss_hf;
return 0;
@@ -2495,7 +2495,7 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
return 0;
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
goto write_ext_vlan;
/* Update maximum packet length */
@@ -2528,7 +2528,7 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
return 0;
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
goto write_ext_vlan;
/* Update maximum packet length */
@@ -2554,22 +2554,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
igc_vlan_hw_strip_enable(dev);
else
igc_vlan_hw_strip_disable(dev);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
igc_vlan_hw_filter_enable(dev);
else
igc_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
return igc_vlan_hw_extend_enable(dev);
else
return igc_vlan_hw_extend_disable(dev);
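
The mask bits above are what the ethdev layer hands to the PMD callback
when the application flips VLAN offloads at runtime. A minimal sketch,
with an illustrative port id:

#include <rte_ethdev.h>

static int
enable_vlan_strip(uint16_t port_id)
{
	int flags = rte_eth_dev_get_vlan_offload(port_id);

	if (flags < 0)
		return flags;
	/* The changed bit reaches the driver via RTE_ETH_VLAN_STRIP_MASK
	 * in eth_igc_vlan_offload_set(). */
	flags |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, flags);
}
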
@@ -2587,7 +2587,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
uint32_t reg_val;
/* only outer TPID of double VLAN can be configured */
- if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+ if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
reg_val = IGC_READ_REG(hw, IGC_VET);
reg_val = (reg_val & (~IGC_VET_EXT)) |
((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..066792b8a2d8 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -59,38 +59,38 @@ extern "C" {
#define IGC_TX_MAX_MTU_SEG UINT8_MAX
#define IGC_RX_OFFLOAD_ALL ( \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_KEEP_CRC | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define IGC_TX_OFFLOAD_ALL ( \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_UDP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_UDP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define IGC_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IGC_MAX_ETQF_FILTERS 3 /* etqf(3) is used for 1588 */
#define IGC_ETQF_FILTER_1588 3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..82e7e084b41d 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
uint32_t flags; /**< RX flags. */
- uint64_t offloads; /**< offloads of DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
};
/** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
/**< Start context position for transmit queue. */
struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
/**< Hardware context history.*/
- uint64_t offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
};
static inline uint64_t
@@ -866,23 +866,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
}
@@ -1056,10 +1056,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
}
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
igc_rss_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/*
* configure the RSS registers first,
* then disable the RSS logic
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
rctl |= IGC_RCTL_LPE;
/*
@@ -1130,7 +1130,7 @@ igc_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure
*/
- rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+ rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
RTE_ETHER_CRC_LEN : 0;
bus_addr = rxq->rx_ring_phys_addr;
@@ -1196,7 +1196,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
}
- if (offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
if (dev->data->scattered_rx) {
@@ -1240,20 +1240,20 @@ igc_rx_init(struct rte_eth_dev *dev)
rxcsum |= IGC_RXCSUM_PCSD;
/* Enable both L3/L4 rx checksum offload */
- if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
rxcsum |= IGC_RXCSUM_IPOFL;
else
rxcsum &= ~IGC_RXCSUM_IPOFL;
if (offloads &
- (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
rxcsum |= IGC_RXCSUM_TUOFL;
- offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
} else {
rxcsum &= ~IGC_RXCSUM_TUOFL;
}
- if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
rxcsum |= IGC_RXCSUM_CRCOFL;
else
rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1261,7 +1261,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
/* Setup the Receive Control Register. */
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
else
rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1298,12 +1298,12 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
dvmolr |= IGC_DVMOLR_STRVLAN;
else
dvmolr &= ~IGC_DVMOLR_STRVLAN;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
dvmolr &= ~IGC_DVMOLR_STRCRC;
else
dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2272,10 +2272,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
if (on) {
reg_val |= IGC_DVMOLR_STRVLAN;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..824341fee3f6 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(link));
if (adapter->idev.port_info->config.an_enable) {
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
}
if (!adapter->link_up ||
!(lif->state & IONIC_LIF_F_UP)) {
/* Interface is down */
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else {
/* Interface is up */
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (adapter->link_speed) {
case 10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case 25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
case 40000:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case 50000:
- link.link_speed = ETH_SPEED_NUM_50G;
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
break;
case 100000:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -397,17 +397,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
dev_info->speed_capa =
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
/*
* Per-queue capabilities
* RTE does not support disabling a feature on a queue if it is
* enabled globally on the device. Thus the driver does not advertise
- * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+ * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
* though the driver would be otherwise capable of disabling it on
* a per-queue basis.
*/
@@ -421,25 +421,25 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
*/
dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
0;
dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
0;
dev_info->rx_desc_lim = rx_desc_lim;
@@ -474,9 +474,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
fc_conf->autoneg = 0;
if (idev->port_info->config.pause_type)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
}
return 0;
@@ -498,14 +498,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
break;
- case RTE_FC_RX_PAUSE:
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
return -ENOTSUP;
}
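
These RTE_ETH_FC_* modes arrive via rte_eth_dev_flow_ctrl_set(). A sketch
of requesting full (Rx+Tx) pause, which ionic maps to
IONIC_PORT_PAUSE_TYPE_LINK above; the port id is illustrative:

#include <rte_ethdev.h>

static int
enable_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;
	int ret = rte_eth_dev_flow_ctrl_get(port_id, &fc);

	if (ret != 0)
		return ret;
	fc.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}
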
@@ -629,17 +629,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
IONIC_RSS_HASH_KEY_SIZE);
if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
rss_conf->rss_hf = rss_hf;
@@ -671,17 +671,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (!lif->rss_ind_tbl)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
rss_types |= IONIC_RSS_TYPE_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
rss_types |= IONIC_RSS_TYPE_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -853,15 +853,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
static inline uint32_t
ionic_parse_link_speeds(uint16_t link_speeds)
{
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
return 100000;
- else if (link_speeds & ETH_LINK_SPEED_50G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
return 50000;
- else if (link_speeds & ETH_LINK_SPEED_40G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
return 40000;
- else if (link_speeds & ETH_LINK_SPEED_25G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
return 25000;
- else if (link_speeds & ETH_LINK_SPEED_10G)
+ else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
return 10000;
else
return 0;
@@ -885,12 +885,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
IONIC_PRINT_CALL();
allowed_speeds =
- ETH_LINK_SPEED_FIXED |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ RTE_ETH_LINK_SPEED_FIXED |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
if (dev_conf->link_speeds & ~allowed_speeds) {
IONIC_PRINT(ERR, "Invalid link setting");
@@ -907,7 +907,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure link */
- an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
ionic_dev_cmd_port_autoneg(idev, an_enable);
err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
#include <rte_ethdev.h>
#define IONIC_ETH_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 431eda777b78..d4eb6c1d78be 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
/*
* IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
- * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+ * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
*/
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
else
lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
/*
* NB: While it is true that RSS_HASH is always enabled on ionic,
* setting this flag unconditionally causes problems in DTS.
- * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
*/
/* RX per-port */
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
- rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+ rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
lif->features |= IONIC_ETH_HW_RX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_RX_CSUM;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
lif->features |= IONIC_ETH_HW_RX_SG;
lif->eth_dev->data->scattered_rx = 1;
} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
}
/* Covers VLAN_STRIP */
- ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+ ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
/* TX per-port */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
lif->features |= IONIC_ETH_HW_TX_CSUM;
else
lif->features &= ~IONIC_ETH_HW_TX_CSUM;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
else
lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
lif->features |= IONIC_ETH_HW_TX_SG;
else
lif->features &= ~IONIC_ETH_HW_TX_SG;
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
lif->features |= IONIC_ETH_HW_TSO;
lif->features |= IONIC_ETH_HW_TSO_IPV6;
lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..0c1f6113d0e9 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -204,11 +204,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
txq->flags |= IONIC_QCQ_F_DEFERRED;
/* Convert the offload flags into queue flags */
- if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_L3;
- if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_TCP;
- if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+ if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
txq->flags |= IONIC_QCQ_F_CSUM_UDP;
eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -745,11 +745,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
/*
* Note: the interface does not currently support
- * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
* when the adapter will be able to keep the CRC and subtract
* it from the length for all received packets:
* if (eth_dev->data->dev_conf.rxmode.offloads &
- * DEV_RX_OFFLOAD_KEEP_CRC)
+ * RTE_ETH_RX_OFFLOAD_KEEP_CRC)
* rxq->crc_len = ETHER_CRC_LEN;
*/
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..2f6df2c2f6b8 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->speed_capa =
(hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
- ETH_LINK_SPEED_10G :
+ RTE_ETH_LINK_SPEED_10G :
((hw->retimer.mac_type ==
IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
- ETH_LINK_SPEED_25G :
- ETH_LINK_SPEED_AUTONEG);
+ RTE_ETH_LINK_SPEED_25G :
+ RTE_ETH_LINK_SPEED_AUTONEG);
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
@@ -67,31 +67,31 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
};
dev_info->rx_queue_offload_capa = 0;
dev_info->rx_offload_capa =
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+
+ dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
dev_info->tx_queue_offload_capa;
dev_info->dev_capa =
@@ -2410,10 +2410,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
(uint64_t *)&link_speed);
switch (link_speed) {
case IFPGA_RAWDEV_LINK_SPEED_10GB:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case IFPGA_RAWDEV_LINK_SPEED_25GB:
- link->link_speed = ETH_SPEED_NUM_25G;
+ link->link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2471,9 +2471,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2529,9 +2529,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
rawdev = hw->rawdev;
ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2803,10 +2803,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
if (frame_size > IPN3KE_ETH_MAX_LEN)
dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ (uint64_t)(RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
else
dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
+ (uint64_t)(~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b54d..3707daf4760f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1865,7 +1865,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
qinq &= IXGBE_DMATXCTL_GDV;
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (qinq) {
reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1880,7 +1880,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
" by single VLAN");
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (qinq) {
/* Only the high 16-bits is valid */
IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1967,10 +1967,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -2091,7 +2091,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
if (hw->mac.type == ixgbe_mac_82598EB) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
ctrl |= IXGBE_VLNCTRL_VME;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2108,7 +2108,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
ctrl |= IXGBE_RXDCTL_VME;
on = TRUE;
} else {
@@ -2130,17 +2130,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct ixgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -2151,19 +2151,19 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
ixgbe_vlan_hw_strip_config(dev);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ixgbe_vlan_hw_filter_enable(dev);
else
ixgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
ixgbe_vlan_hw_extend_enable(dev);
else
ixgbe_vlan_hw_extend_disable(dev);
@@ -2202,10 +2202,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -2229,18 +2229,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implemented yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2250,12 +2250,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if no mq mode is configured, use the default scheme */
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -2264,12 +2264,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -2284,13 +2284,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdq+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2299,15 +2299,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2316,39 +2316,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -2357,7 +2357,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB) {
if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
PMD_INIT_LOG(ERR,
@@ -2381,8 +2381,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = ixgbe_check_mq_mode(dev);
@@ -2627,15 +2627,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
ixgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -2712,17 +2712,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_5G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_5G |
+ RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- allowed_speeds = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
break;
default:
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
}
link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2736,7 +2736,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2754,17 +2754,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
speed = IXGBE_LINK_SPEED_82599_AUTONEG;
}
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= IXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= IXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= IXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= IXGBE_LINK_SPEED_100_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= IXGBE_LINK_SPEED_10_FULL;
}
@@ -3839,7 +3839,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
* When DCB/VT is off, maximum number of queues changes,
* except for 82598EB, which remains constant.
*/
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
hw->mac.type != ixgbe_mac_82598EB)
dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
}
@@ -3849,9 +3849,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3890,21 +3890,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
- dev_info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
if (hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X540_vf ||
hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550_vf) {
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
}
if (hw->mac.type == ixgbe_mac_X550) {
- dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
- dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
}
/* Driver-preferred Rx/Tx parameters */
@@ -3973,9 +3973,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
if (hw->mac.type == ixgbe_mac_82598EB)
- dev_info->max_vmdq_pools = ETH_16_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
else
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
@@ -4218,11 +4218,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
u32 esdp_reg;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
hw->mac.get_link_status = true;
@@ -4244,8 +4244,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
if (diag != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -4281,37 +4281,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
}
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case IXGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
break;
case IXGBE_LINK_SPEED_10_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case IXGBE_LINK_SPEED_100_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case IXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case IXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case IXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case IXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -4528,7 +4528,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4747,13 +4747,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -5199,11 +5199,11 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (frame_size > IXGBE_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -5271,22 +5271,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -5346,8 +5346,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = ixgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5581,10 +5581,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
ixgbevf_vlan_strip_queue_set(dev, i, on);
}
}
@@ -5715,12 +5715,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
}
@@ -5734,15 +5734,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= IXGBE_VMOLR_AUPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= IXGBE_VMOLR_ROMPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= IXGBE_VMOLR_ROPE;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= IXGBE_VMOLR_BAM;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= IXGBE_VMOLR_MPE;
return new_val;
@@ -5753,8 +5753,8 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
#define IXGBE_MRCTL_DPME 0x04 /* Downlink Port Mirroring. */
#define IXGBE_MRCTL_VLME 0x08 /* VLAN Mirroring. */
#define IXGBE_INVALID_MIRROR_TYPE(mirror_type) \
- ((mirror_type) & ~(uint8_t)(ETH_MIRROR_VIRTUAL_POOL_UP | \
- ETH_MIRROR_UPLINK_PORT | ETH_MIRROR_DOWNLINK_PORT | ETH_MIRROR_VLAN))
+ ((mirror_type) & ~(uint8_t)(RTE_ETH_MIRROR_VIRTUAL_POOL_UP | \
+ RTE_ETH_MIRROR_UPLINK_PORT | RTE_ETH_MIRROR_DOWNLINK_PORT | RTE_ETH_MIRROR_VLAN))
static int
ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
@@ -5794,7 +5794,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
mirror_type |= IXGBE_MRCTL_VLME;
* Check if vlan id is valid and find corresponding VLAN ID
* index in VLVF
@@ -5827,7 +5827,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
mr_info->mr_conf[rule_id].vlan.vlan_mask =
mirror_conf->vlan.vlan_mask;
- for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
if (mirror_conf->vlan.vlan_mask & (1ULL << i))
mr_info->mr_conf[rule_id].vlan.vlan_id[i] =
mirror_conf->vlan.vlan_id[i];
@@ -5836,7 +5836,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
mv_lsb = 0;
mv_msb = 0;
mr_info->mr_conf[rule_id].vlan.vlan_mask = 0;
- for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++)
+ for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++)
mr_info->mr_conf[rule_id].vlan.vlan_id[i] = 0;
}
}
@@ -5845,7 +5845,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
* if pool mirroring is enabled, write the related pool mask register;
* if disabled, clear the PFMRVM register
*/
- if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
mirror_type |= IXGBE_MRCTL_VPME;
if (on) {
mp_lsb = mirror_conf->pool_mask & 0xFFFFFFFF;
@@ -5859,9 +5859,9 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
mr_info->mr_conf[rule_id].pool_mask = 0;
}
}
- if (mirror_conf->rule_type & ETH_MIRROR_UPLINK_PORT)
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_UPLINK_PORT)
mirror_type |= IXGBE_MRCTL_UPME;
- if (mirror_conf->rule_type & ETH_MIRROR_DOWNLINK_PORT)
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_DOWNLINK_PORT)
mirror_type |= IXGBE_MRCTL_DPME;
/* read mirror control register and recalculate it */
@@ -5882,13 +5882,13 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
/* write pool mirror control register */
- if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), mp_lsb);
IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset),
mp_msb);
}
/* write VLAN mirror control register */
- if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+ if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), mv_lsb);
IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset),
mv_msb);
@@ -6266,7 +6266,7 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) &&
(rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
IXGBE_MMW_SIZE_JUMBO_FRAME);
@@ -6942,15 +6942,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = IXGBE_INCVAL_100;
shift = IXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = IXGBE_INCVAL_1GB;
shift = IXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = IXGBE_INCVAL_10GB;
shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7361,16 +7361,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- return ETH_RSS_RETA_SIZE_512;
+ return RTE_ETH_RSS_RETA_SIZE_512;
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
- return ETH_RSS_RETA_SIZE_64;
+ return RTE_ETH_RSS_RETA_SIZE_64;
case ixgbe_mac_X540_vf:
case ixgbe_mac_82599_vf:
return 0;
default:
- return ETH_RSS_RETA_SIZE_128;
+ return RTE_ETH_RSS_RETA_SIZE_128;
}
}
@@ -7380,10 +7380,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
case ixgbe_mac_X550EM_a:
- if (reta_idx < ETH_RSS_RETA_SIZE_128)
+ if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
return IXGBE_RETA(reta_idx >> 2);
else
- return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+ return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
case ixgbe_mac_X550_vf:
case ixgbe_mac_X550EM_x_vf:
case ixgbe_mac_X550EM_a_vf:
@@ -7439,7 +7439,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -7450,7 +7450,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled */
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -7474,9 +7474,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled */
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7489,7 +7489,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7742,7 +7742,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -7774,7 +7774,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -7871,12 +7871,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
@@ -7908,11 +7908,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
ret = -EINVAL;
break;
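
For readers tracking this rename from the application side, a minimal sketch
(helper name, queue counts and descriptor sizes are illustrative, not part of
the patch; assumes the RTE_ETH_* names introduced by this series) of bringing
up a port with the new constants:

#include <rte_ethdev.h>

/* Illustrative sketch only: configure one Rx/Tx queue pair using the
 * renamed RTE_ETH_* constants; error handling is trimmed for brevity. */
static int
setup_port(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = RTE_ETH_MQ_RX_RSS,
			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
		},
		.rx_adv_conf.rss_conf = {
			.rss_hf = RTE_ETH_RSS_IPV4 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP,
		},
		.txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL, mb_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;
	return rte_eth_dev_start(port_id);
}
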
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca246b..3443154589e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -113,15 +113,15 @@
#define IXGBE_FDIR_NVGRE_TUNNEL_TYPE 0x0
#define IXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define IXGBE_VF_IRQ_ENABLE_MASK 3 /* vf irq enable mask */
#define IXGBE_VF_MAXMSIVECTOR 1
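
A small usage sketch (variable names are illustrative) of how an application
would clamp its requested RSS hash types against the capability set that
IXGBE_RSS_OFFLOAD_ALL above feeds into dev_info:

struct rte_eth_dev_info info;
uint64_t rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
		  RTE_ETH_RSS_NONFRAG_IPV4_UDP;

/* Keep only the hash types the port actually reports. */
if (rte_eth_dev_info_get(port_id, &info) == 0)
	rss_hf &= info.flow_type_rss_offloads;
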
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
uint32_t key);
static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc);
+ enum rte_eth_fdir_pballoc_type pballoc);
static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_input *input, uint8_t queue,
uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
{
*fdirctrl = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
return ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash = ixgbe_atr_compute_hash_82599(input,
IXGBE_ATR_BUCKET_HASH_KEY) &
SIG_BUCKET_128KB_HASH_MASK;
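
To illustrate the renamed pballoc enum from the application side (a sketch,
not part of the patch; the filter counts follow the 82599 values noted in
configure_fdir_flags() above):

struct rte_eth_conf conf = { 0 };

/* Larger pre-allocated FDIR packet buffer -> more signature filters:
 * 64K -> 8k-1, 128K -> 16k-1, 256K -> 32k-1 on 82599. */
conf.fdir_conf.mode = RTE_FDIR_MODE_SIGNATURE;
conf.fdir_conf.pballoc = RTE_ETH_FDIR_PBALLOC_128K;
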
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 511b612f7fe4..0557de6c1aa5 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* The e-tag id is taken as little endian by the HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
IXGBE_SECTXCTRL_STORE_FORWARD);
reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
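
As a usage note (a sketch with an illustrative conf variable): the checks
above mean an application enabling inline IPsec on ixgbe must request the
security offload in both directions and must not combine it with LRO or
CRC keeping:

conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
/* Mutually exclusive with RSC/LRO and KEEP_CRC per the checks above. */
conf.rxmode.offloads &= ~(RTE_ETH_RX_OFFLOAD_TCP_LRO |
			  RTE_ETH_RX_OFFLOAD_KEEP_CRC);
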
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..d03238b728ba 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -107,15 +107,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -266,15 +266,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
gpie |= IXGBE_GPIE_VTMODE_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
gpie |= IXGBE_GPIE_VTMODE_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
gpie |= IXGBE_GPIE_VTMODE_16;
break;
@@ -604,11 +604,11 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
if (max_frame > IXGBE_ETH_MAX_LEN) {
dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -684,29 +684,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
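
The pool-count thresholds above reduce to a simple rule; a hedged sketch
(hypothetical helper name) of the VF-count to queues-per-pool mapping that
ixgbe_pf_host_init() applies:

/* Hypothetical helper mirroring the thresholds above: the more VFs,
 * the fewer queues each pool gets (RTE_ETH_*_POOLS evaluate to 16/32/64). */
static uint16_t
ixgbe_queues_per_pool(uint16_t vf_num)
{
	if (vf_num >= RTE_ETH_32_POOLS)	/* 64-pool mode */
		return 2;
	if (vf_num >= RTE_ETH_16_POOLS)	/* 32-pool mode */
		return 4;
	return 8;			/* 16-pool mode */
}
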
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c814a28cb49a..b5ee83d8edc8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2591,26 +2591,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2778,7 +2778,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/*
@@ -3014,7 +3014,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (hw->mac.type != ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return offloads;
}
@@ -3025,20 +3025,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
uint64_t offloads;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hw->mac.type == ixgbe_mac_82598EB)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (ixgbe_is_vf(dev) == 0)
- offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
/*
* RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3048,20 +3048,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_X540 ||
hw->mac.type == ixgbe_mac_X550) &&
!RTE_ETH_DEV_SRIOV(dev).active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
if (hw->mac.type == ixgbe_mac_X550 ||
hw->mac.type == ixgbe_mac_X550EM_x ||
hw->mac.type == ixgbe_mac_X550EM_a)
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -3116,7 +3116,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -3520,23 +3520,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
/* Set configured hashing protocols in MRQC register */
rss_hf = rss_conf->rss_hf;
mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
- if (rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
- if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
- if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
}
@@ -3618,23 +3618,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
rss_hf = 0;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
- rss_hf |= ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
rss_conf->rss_hf = rss_hf;
return 0;
}
@@ -3710,12 +3710,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
ixgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* RXPBSIZE
@@ -3740,7 +3740,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3749,7 +3749,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
}
/* MRQC: enable vmdq and dcb */
- mrqc = (num_pools == ETH_16_POOLS) ?
+ mrqc = (num_pools == RTE_ETH_16_POOLS) ?
IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
@@ -3765,7 +3765,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3789,7 +3789,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 16 or 32 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*
* MPSAR - allow pools to read specific mac addresses
@@ -3871,7 +3871,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
if (hw->mac.type != ixgbe_mac_82598EB)
/*PF VF Transmit Enable*/
IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
- vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3887,12 +3887,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3902,7 +3902,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3920,12 +3920,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3935,7 +3935,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3962,7 +3962,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3989,7 +3989,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4158,7 +4158,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
if (hw->mac.type != ixgbe_mac_82598EB) {
config_dcb_rx = DCB_RX_CONFIG;
@@ -4171,8 +4171,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_configure(dev);
}
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4185,7 +4185,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -4196,7 +4196,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB TX configuration parameters from rte_eth_conf */
@@ -4212,15 +4212,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
- if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+ if ((mask & 0x1) && (j < RTE_ETH_DCB_NUM_USER_PRIORITIES))
map[j++] = i;
mask >>= 1;
}
@@ -4270,7 +4270,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
}
}
@@ -4286,7 +4286,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
}
@@ -4322,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/*
@@ -4336,7 +4336,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = ixgbe_dcb_pfc_enabled;
}
ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -4357,12 +4357,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+ if ((dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB) &&
+ (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB) &&
+ (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS))
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -4418,7 +4418,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* VFRE: pool enabling for receive - 64 */
IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
/*
@@ -4539,11 +4539,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
mrqc &= ~IXGBE_MRQC_MRQE_MASK;
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS64EN;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= IXGBE_MRQC_VMDQRSS32EN;
break;
@@ -4564,17 +4564,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQEN);
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT4TCEN);
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
IXGBE_WRITE_REG(hw, IXGBE_MRQC,
IXGBE_MRQC_VMDQRT8TCEN);
break;
@@ -4601,21 +4601,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
ixgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
ixgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode. */
ixgbe_rss_disable(dev);
@@ -4626,18 +4626,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
ixgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
ixgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4671,7 +4671,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
ixgbe_vmdq_tx_hw_configure(hw);
else {
mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4684,13 +4684,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
IXGBE_MTQC_8TC_8TQ;
break;
@@ -4898,7 +4898,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
rxq->rx_using_sse = rx_using_sse;
#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
#endif
}
}
@@ -4926,10 +4926,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4937,8 +4937,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
/*
* According to chapter 4.6.7.2.1 of the Spec Rev. 3.0,
* RSC configuration requires HW CRC stripping being
@@ -4952,7 +4952,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
/* RFCTL configuration */
rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
- if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~IXGBE_RFCTL_RSC_DIS;
else
rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4961,7 +4961,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set RDRXCTL.RSCACKC bit */
@@ -5082,7 +5082,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
else
hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5090,7 +5090,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
@@ -5119,7 +5119,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first.
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5128,7 +5128,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -5171,11 +5171,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -5190,7 +5190,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
rxcsum |= IXGBE_RXCSUM_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= IXGBE_RXCSUM_IPPCSE;
else
rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5200,7 +5200,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == ixgbe_mac_82599EB ||
hw->mac.type == ixgbe_mac_X540) {
rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
else
rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5406,9 +5406,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY) ||
+ RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY)) {
+ RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = ixgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -5696,7 +5696,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first.
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -5745,7 +5745,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(rxmode->max_rx_pkt_len +
2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -5754,8 +5754,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/* Set RQPL for VF RSS according to max Rx queue */
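
Matching the ixgbe_set_rsc() sanity check above, an application-side sketch
(illustrative port_id and conf names) would gate LRO on the advertised
capability before configuring the port:

struct rte_eth_dev_info info;

/* Request LRO only when the port reports it; otherwise
 * ixgbe_set_rsc() rejects the configuration. */
if (rte_eth_dev_info_get(port_id, &info) == 0 &&
    (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO))
	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
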
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 476ef62cfda2..220efffe4d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
uint8_t rx_udp_csum_zero_err;
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -226,7 +226,7 @@ struct ixgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index adba855ca30f..714707941537 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -278,7 +278,7 @@ static inline int
ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
{
#ifndef RTE_LIBRTE_IEEE1588
- struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
/* no fdir support */
if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index a8407e742e6d..c2ab3131f22e 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a19408..536e33010703 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
/**< Maximum number of MAC addresses. */
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/**< Device RX offload capabilities. */
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**< Device TX offload capabilities. */
dev_info->speed_capa =
representor->pf_ethdev->data->dev_link.link_speed;
- /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+ /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
dev_info->switch_info.name =
representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
*/
if (hw->mac.type == ixgbe_mac_82598EB)
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_16_POOLS;
+ RTE_ETH_16_POOLS;
else
queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
- ETH_64_POOLS;
+ RTE_ETH_64_POOLS;
for (q = 0; q < queues_per_pool; q++)
(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..bad6691648a1 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
* @param rx_mask
* The RX mode mask, which is one or more of accepting Untagged Packets,
* packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-* ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-* ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+* RTE_ETH_VMDQ_ACCEPT_UNTAG,RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+* RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
* in rx_mode.
* @param on
* 1 - Enable a VF RX mode.
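For reference, a minimal sketch of how an application might call this API
with the renamed flags (port number, VF index and error handling are
assumptions of the example, not part of the patch):

#include <rte_ethdev.h>
#include <rte_pmd_ixgbe.h>

/* Let VF 0 on port 0 accept untagged and broadcast traffic using the
 * new RTE_ETH_VMDQ_ACCEPT_* names. */
static int
vf_rxmode_example(void)
{
	uint16_t rx_mask = RTE_ETH_VMDQ_ACCEPT_UNTAG |
			   RTE_ETH_VMDQ_ACCEPT_BROADCAST;

	return rte_pmd_ixgbe_set_vf_rxmode(0 /* port */, 0 /* vf */,
					   rx_mask, 1 /* on */);
}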
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 871d11c4133d..29060ca76f93 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static int is_kni_initialized;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..118170670fbb 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
break;
/* CN23xx 25G cards */
case PCI_SUBSYS_DEV_ID_CN2350_225:
case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = ETH_LINK_SPEED_25G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
break;
default:
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
lio_dev_err(lio_dev,
"Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->max_mac_addrs = 1;
- devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+ devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
+ devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
devinfo->rx_desc_lim = lio_rx_desc_lim;
devinfo->tx_desc_lim = lio_tx_desc_lim;
devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_EX |
- ETH_RSS_IPV6_TCP_EX);
+ devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_EX |
+ RTE_ETH_RSS_IPV6_TCP_EX);
return 0;
}
@@ -483,10 +483,10 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (frame_len > LIO_ETH_MAX_LEN)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
eth_dev->data->mtu = mtu;
@@ -616,17 +616,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
if (rss_state->ip)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (rss_state->tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (rss_state->ipv6)
- rss_hf |= ETH_RSS_IPV6;
+ rss_hf |= RTE_ETH_RSS_IPV6;
if (rss_state->ipv6_tcp_hash)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (rss_state->ipv6_ex)
- rss_hf |= ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
rss_conf->rss_hf = rss_hf;
@@ -694,42 +694,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
if (rss_state->hash_disable)
return -EINVAL;
- if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
hashinfo |= LIO_RSS_HASH_IPV4;
rss_state->ip = 1;
} else {
rss_state->ip = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV4;
rss_state->tcp_hash = 1;
} else {
rss_state->tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
hashinfo |= LIO_RSS_HASH_IPV6;
rss_state->ipv6 = 1;
} else {
rss_state->ipv6 = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6;
rss_state->ipv6_tcp_hash = 1;
} else {
rss_state->ipv6_tcp_hash = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
hashinfo |= LIO_RSS_HASH_IPV6_EX;
rss_state->ipv6_ex = 1;
} else {
rss_state->ipv6_ex = 0;
}
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
rss_state->ipv6_tcp_ex_hash = 1;
} else {
@@ -778,7 +778,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -835,7 +835,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
if (udp_tnl == NULL)
return -EINVAL;
- if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+ if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
lio_dev_err(lio_dev, "Unsupported tunnel type\n");
return -1;
}
@@ -933,10 +933,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
/* Initialize */
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
/* Return what we found */
if (lio_dev->linfo.link.s.link_up == 0) {
@@ -944,18 +944,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
return rte_eth_linkstatus_set(eth_dev, &link);
}
- link.link_status = ETH_LINK_UP; /* Interface is up */
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (lio_dev->linfo.link.s.speed) {
case LIO_LINK_SPEED_10000:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case LIO_LINK_SPEED_25000:
- link.link_speed = ETH_SPEED_NUM_25G;
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
}
return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1124,10 +1124,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
struct rte_eth_rss_conf rss_conf;
switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
lio_dev_rss_configure(eth_dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* if mq_mode is none, disable rss mode. */
default:
memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1509,7 +1509,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -1530,11 +1530,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
}
lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
lio_dev_err(lio_dev, "Unable to set Link Down\n");
return -1;
}
@@ -1746,9 +1746,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_RSS_HASH;
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Inform firmware about change in number of queues to use.
* Disable IO queues and reset registers for re-configuration.
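On the RTE_ETH_TUNNEL_TYPE_* rename touched in the hunks above, a minimal
sketch of the application-side call that reaches this driver path, assuming
a configured port_id:

#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789,                       /* IANA VXLAN port */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN, /* renamed enum */
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}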
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb77..a117a05228fc 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
int i;
int ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e3e..ea66f5bfd452 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
};
static const struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_AUTONEG
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_AUTONEG
};
#define MEMIF_MP_SEND_REGION "memif_mp_send_region"
@@ -1216,7 +1216,7 @@ memif_connect(struct rte_eth_dev *dev)
pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
}
MIF_LOG(INFO, "Connected.");
return 0;
@@ -1367,10 +1367,10 @@ memif_link_update(struct rte_eth_dev *dev,
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
proc_private = dev->process_private;
- if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
proc_private->regions_num == 0) {
memif_mp_request_regions(dev);
- } else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+ } else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
proc_private->regions_num > 0) {
memif_free_regions(dev);
}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->if_index = priv->if_index;
info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
info->speed_capa =
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_20G |
- ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_56G;
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_20G |
+ RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_56G;
info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
dev_link.link_speed = link_speed;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
dev->data->dev_link = dev_link;
return 0;
}
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
ret = 0;
out:
MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
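For context on the RTE_ETH_FC_* mapping above, a minimal sketch of how an
application would read the current setting and force full flow control with
the new names (port_id is an assumption of the example):

#include <rte_ethdev.h>

static int
force_full_fc(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;	/* was RTE_FC_FULL */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}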
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
};
static const uint64_t dpdk[] = {
[INNER] = 0,
- [IPV4] = ETH_RSS_IPV4,
- [IPV4_1] = ETH_RSS_FRAG_IPV4,
- [IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
- [IPV6] = ETH_RSS_IPV6,
- [IPV6_1] = ETH_RSS_FRAG_IPV6,
- [IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
- [IPV6_3] = ETH_RSS_IPV6_EX,
+ [IPV4] = RTE_ETH_RSS_IPV4,
+ [IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+ [IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ [IPV6] = RTE_ETH_RSS_IPV6,
+ [IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+ [IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ [IPV6_3] = RTE_ETH_RSS_IPV6_EX,
[TCP] = 0,
[UDP] = 0,
- [IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
- [IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
- [IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
- [IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
- [IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
- [IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+ [IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+ [IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+ [IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+ [IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+ [IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ [IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
};
static const uint64_t verbs[RTE_DIM(dpdk)] = {
[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
* - MAC flow rules are generated from @p dev->data->mac_addrs
* (@p priv->mac array).
* - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
* is enabled and VLAN filters are configured.
*
* @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
struct rte_ether_addr *rule_mac = &eth_spec.dst;
rte_be16_t *rule_vlan =
(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!ETH_DEV(priv)->data->promiscuous ?
&vlan_spec.tci :
NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
static void
mlx4_link_status_alarm(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
};
uint32_t caught[RTE_DIM(type)] = { 0 };
struct ibv_async_event event;
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
unsigned int i;
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
int
mlx4_intr_install(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
int rc;
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
int
mlx4_rxq_intr_enable(struct mlx4_priv *priv)
{
- const struct rte_intr_conf *const intr_conf =
+ const struct rte_eth_intr_conf *const intr_conf =
&ETH_DEV(priv)->data->dev_conf.intr_conf;
if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
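The struct rename above changes only the type name; the fields are
unchanged. A minimal sketch of how an application enables the link-state
and Rx-queue interrupts these handlers consume (other rte_eth_conf fields
are left at defaults for brevity):

#include <rte_ethdev.h>

static const struct rte_eth_conf dev_conf_example = {
	.intr_conf = {		/* struct rte_eth_intr_conf after rename */
		.lsc = 1,	/* link status change interrupt */
		.rxq = 1,	/* per-queue Rx interrupt */
	},
};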
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..9977c761880a 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,13 +682,13 @@ mlx4_rxq_detach(struct rxq *rxq)
uint64_t
mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
- offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
return offloads;
}
@@ -704,7 +704,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
uint64_t
mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
(void)priv;
return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
/* By default, FCS (CRC) is stripped by hardware. */
crc_present = 0;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (priv->hw_fcs_strip) {
crc_present = 1;
} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts = elts,
/* Toggle Rx checksum offload if hardware supports it. */
.csum = priv->hw_csum &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.csum_l2tun = priv->hw_csum_l2tun &&
- (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+ (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
.crc_present = crc_present,
.l2tun_offload = priv->hw_csum_l2tun,
.stats = {
@@ -831,7 +831,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
(mb_len - RTE_PKTMBUF_HEADROOM)) {
;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
uint32_t size =
RTE_PKTMBUF_HEADROOM +
dev->data->dev_conf.rxmode.max_rx_pkt_len;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 2df26842fbe4..19feec5e5202 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
uint64_t
mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
{
- uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+ uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (priv->hw_csum) {
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
}
if (priv->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (priv->hw_csum_l2tun) {
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (priv->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
}
return offloads;
}
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
.elts_comp_cd_init =
RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
.csum = priv->hw_csum &&
- (offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM)),
+ (offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
.csum_l2tun = priv->hw_csum_l2tun &&
(offloads &
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
/* Enable Tx loopback for VF devices. */
.lb = !!priv->vf,
.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
}
link_speed = ethtool_cmd_speed(&edata);
if (link_speed == -1)
- dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
dev_link.link_speed = link_speed;
priv->link_speed_capa = 0;
if (edata.supported & (SUPPORTED_1000baseT_Full |
SUPPORTED_1000baseKX_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (edata.supported & SUPPORTED_10000baseKR_Full)
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (edata.supported & (SUPPORTED_40000baseKR4_Full |
SUPPORTED_40000baseCR4_Full |
SUPPORTED_40000baseSR4_Full |
SUPPORTED_40000baseLR4_Full))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
return ret;
}
dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
- ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+ RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
sc = ecmd->link_mode_masks[0] |
((uint64_t)ecmd->link_mode_masks[1] << 32);
priv->link_speed_capa = 0;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
sc = ecmd->link_mode_masks[2] |
((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
MLX5_BITSHIFT
(ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
- priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+ priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
- ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+ RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ETH_LINK_SPEED_FIXED);
+ RTE_ETH_LINK_SPEED_FIXED);
*link = dev_link;
return 0;
}
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
fc_conf->autoneg = ethpause.autoneg;
if (ethpause.rx_pause && ethpause.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (ethpause.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (ethpause.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)&ethpause;
ethpause.autoneg = fc_conf->autoneg;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_RX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
ethpause.rx_pause = 1;
else
ethpause.rx_pause = 0;
- if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
- (fc_conf->mode & RTE_FC_TX_PAUSE))
+ if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
ethpause.tx_pause = 1;
else
ethpause.tx_pause = 0;
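A minimal sketch of how the speed capability bits filled in above are
consumed on the application side (port_id is an assumption of the example):

#include <rte_ethdev.h>

static int
port_supports_100g(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	/* speed_capa is a bitmap of RTE_ETH_LINK_SPEED_* flags. */
	return (dev_info.speed_capa & RTE_ETH_LINK_SPEED_100G) != 0;
}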
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa481e..c40cda8fcaf9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1343,8 +1343,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1627,7 +1627,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
/*
* If HW has bug working with tunnel packet decapsulation and
* scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
- * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+ * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
*/
if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
config->hw_fcs_strip = 0;
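For reference on the RTE_ETH_RSS_RETA_SIZE_512 clamp above, a minimal
sketch of an application spreading a 512-entry redirection table across
two queues; it assumes the renamed RTE_ETH_RETA_GROUP_SIZE macro and a
configured port_id:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
reta_spread_example(uint16_t port_id)
{
	struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_512 /
					     RTE_ETH_RETA_GROUP_SIZE];
	unsigned int i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_512; i++) {
		/* Mark the entry valid and alternate between queues 0/1. */
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			UINT64_C(1) << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta
			[i % RTE_ETH_RETA_GROUP_SIZE] = i % 2;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta,
					   RTE_ETH_RSS_RETA_SIZE_512);
}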
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe719..ff1c8e17460a 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1463,10 +1463,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_udp_tunnel *udp_tunnel)
{
MLX5_ASSERT(udp_tunnel != NULL);
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
udp_tunnel->udp_port == 4789)
return 0;
- if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+ if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
udp_tunnel->udp_port == 4790)
return 0;
return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e23196..9588dff05180 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1226,7 +1226,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
struct mlx5_flow_rss_desc {
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint64_t hash_fields; /* Verbs Hash fields. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
#define MLX5_VPMD_DESCS_PER_LOOP 4
/* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
- ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
/* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
MLX5_RSS_SRC_DST_ONLY))
/* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
}
if ((dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+ RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->default_txportconf.ring_size = 256;
info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
- if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
- (priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+ if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+ (priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
info->default_rxportconf.nb_queues = 16;
info->default_txportconf.nb_queues = 16;
if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 4762fa0f5f88..7048fff3883e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
uint64_t rss_types;
/**<
* RSS types bit-field associated with this node
- * (see ETH_RSS_* definitions).
+ * (see RTE_ETH_RSS_* definitions).
*/
uint64_t node_flags;
/**<
@@ -272,7 +272,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
* @param[in] pattern
* User flow pattern.
* @param[in] types
- * RSS types to expand (see ETH_RSS_* definitions).
+ * RSS types to expand (see RTE_ETH_RSS_* definitions).
* @param[in] graph
* Input graph to expand @p pattern according to @p types.
* @param[in] graph_root_index
@@ -522,8 +522,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -531,11 +531,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_OUTER_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -546,8 +546,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_GRE,
MLX5_EXPANSION_NVGRE),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -555,11 +555,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_GTP),
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_VXLAN] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -612,32 +612,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
MLX5_EXPANSION_IPV4_TCP),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
- .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
- ETH_RSS_NONFRAG_IPV4_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_IPV4_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
},
[MLX5_EXPANSION_IPV4_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
},
[MLX5_EXPANSION_IPV6] = {
.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
MLX5_EXPANSION_IPV6_TCP,
MLX5_EXPANSION_IPV6_FRAG_EXT),
.type = RTE_FLOW_ITEM_TYPE_IPV6,
- .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
- ETH_RSS_NONFRAG_IPV6_OTHER,
+ .rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_IPV6_UDP] = {
.type = RTE_FLOW_ITEM_TYPE_UDP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
},
[MLX5_EXPANSION_IPV6_TCP] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
- .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+ .rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
},
[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1048,7 +1048,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
* @param[in] tunnel
* 1 when the hash field is for a tunnel item.
* @param[in] layer_types
- * ETH_RSS_* types.
+ * RTE_ETH_RSS_* types.
* @param[in] hash_fields
* Item hash fields.
*
@@ -1601,14 +1601,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
&rss->types,
"some RSS protocols are not"
" supported");
- if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
- !(rss->types & ETH_RSS_IP))
+ if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+ !(rss->types & RTE_ETH_RSS_IP))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L3 partial RSS requested but L3 RSS"
" type not specified");
- if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
- !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+ if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+ !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"L4 partial RSS requested but L4 RSS"
@@ -6364,8 +6364,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
* mlx5_flow_hashfields_adjust() in advance.
*/
rss_desc->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
}
flow->dev_handles = 0;
if (rss && rss->types) {
@@ -6989,7 +6989,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
if (!priv->reta_idx_n || !priv->rxqs_n) {
return 0;
}
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
queue[i] = (*priv->reta_idx)[i];
@@ -8657,7 +8657,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
NULL, "invalid port configuration");
- if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+ if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
ctx->action_rss.types = 0;
for (i = 0; i != priv->reta_idx_n; ++i)
ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 76ad53f2a1e8..d5d3a89374fe 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
/* Valid layer type for IPV4 RSS. */
#define MLX5_IPV4_LAYER_TYPES \
- (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_OTHER)
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
/* IBV hash source bits for IPV4. */
#define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
/* Valid layer type for IPV6 RSS. */
#define MLX5_IPV6_LAYER_TYPES \
- (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
/* IBV hash source bits for IPV6. */
#define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3f6f5dcfbadb..02a337dc2c93 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10934,9 +10934,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
else
dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10944,9 +10944,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
- if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
- else if (rss_types & ETH_RSS_L3_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
else
dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10960,11 +10960,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
return;
if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
- if (rss_types & ETH_RSS_UDP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_UDP;
else
@@ -10972,11 +10972,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
}
} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
(!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
- if (rss_types & ETH_RSS_TCP) {
- if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ if (rss_types & RTE_ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_SRC_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_DST_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
dev_flow->hash_fields |=
IBV_RX_HASH_DST_PORT_TCP;
else
@@ -14495,9 +14495,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4:
if (rss_types & MLX5_IPV4_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV4;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV4;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV4;
else
*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14506,9 +14506,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV6:
if (rss_types & MLX5_IPV6_LAYER_TYPES) {
*hash_field &= ~MLX5_RSS_HASH_IPV6;
- if (rss_types & ETH_RSS_L3_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_IPV6;
- else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_IPV6;
else
*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14517,11 +14517,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_UDP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_UDP:
- if (rss_types & ETH_RSS_UDP) {
+ if (rss_types & RTE_ETH_RSS_UDP) {
*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
else
*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14530,11 +14530,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
case MLX5_RSS_HASH_IPV4_TCP:
/* fall-through. */
case MLX5_RSS_HASH_IPV6_TCP:
- if (rss_types & ETH_RSS_TCP) {
+ if (rss_types & RTE_ETH_RSS_TCP) {
*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
- if (rss_types & ETH_RSS_L4_DST_ONLY)
+ if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
- else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+ else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
else
*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14682,8 +14682,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
origin = &shared_rss->origin;
origin->func = rss->func;
origin->level = rss->level;
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+ origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index b93fd4d2c962..ef286a13729c 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1834,7 +1834,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_TCP,
+ (rss_desc, tunnel, RTE_ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
IBV_RX_HASH_DST_PORT_TCP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1847,7 +1847,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
if (dev_flow->hash_fields != 0)
dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
- (rss_desc, tunnel, ETH_RSS_UDP,
+ (rss_desc, tunnel, RTE_ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
IBV_RX_HASH_DST_PORT_UDP));
item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..1ee014776643 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
if (!(*priv->rxqs)[i])
continue;
(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
- !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+ !!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
++idx;
}
return 0;
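A minimal sketch of the application-side call that drives this path, using
the renamed RTE_ETH_RSS_* group macros (port_id is an assumption; a NULL
key keeps the device default key):

#include <rte_ethdev.h>

static int
enable_ip_udp_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep current/default key */
		.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}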
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..0d6c58f47d89 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,23 +333,23 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *config = &priv->config;
- uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_RSS_HASH);
+ uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
if (config->hw_fcs_strip)
- offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
if (config->hw_csum)
- offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
if (config->hw_vlan_strip)
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (MLX5_LRO_SUPPORTED(dev))
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -363,7 +363,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
uint64_t
mlx5_get_rx_port_offloads(void)
{
- uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+ uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
return offloads;
}
@@ -695,7 +695,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->dev_conf.rxmode.offloads;
/* The offloads should be checked on rte_eth_dev layer. */
- MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+ MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
DRV_LOG(ERR, "port %u queue index %u split "
"offload not configured",
@@ -1329,7 +1329,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
- unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+ unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
unsigned int max_rx_pkt_len = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1431,7 +1431,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
MLX5_ASSERT(tmpl->rxq.rxseg_n &&
tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
- if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
@@ -1475,7 +1475,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
config->mprq.stride_size_n : mprq_stride_size;
tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
tmpl->rxq.strd_scatter_en =
- !!(offloads & DEV_RX_OFFLOAD_SCATTER);
+ !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
max_lro_size = RTE_MIN(max_rx_pkt_len,
@@ -1490,7 +1490,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
max_lro_size = max_rx_pkt_len;
- } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+ } else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
if (lro_on_queue && first_mb_free_size <
@@ -1551,9 +1551,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
}
mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
/* Toggle RX checksum offload if hardware supports it. */
- tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+ tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
/* Configure Rx timestamp. */
- tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+ tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
tmpl->rxq.timestamp_rx_flag = 0;
if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
&tmpl->rxq.timestamp_offset,
@@ -1562,11 +1562,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
goto error;
}
/* Configure VLAN stripping. */
- tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
/* By default, FCS (CRC) is stripped by hardware. */
tmpl->rxq.crc_present = 0;
tmpl->rxq.lro = lro_on_queue;
- if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
if (config->hw_fcs_strip) {
/*
* RQs used for LRO-enabled TIRs should not be
@@ -1596,7 +1596,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
tmpl->rxq.crc_present << 2);
/* Save port ID. */
tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
- (!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+ (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
tmpl->rxq.port_id = dev->data->port_id;
tmpl->priv = priv;
tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
/* HW checksum offload capabilities of vectorized Tx. */
#define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
/*
* Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
unsigned int diff = 0, olx = 0, i, m;
MLX5_ASSERT(priv);
- if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
/* We should support Multi-Segment Packets. */
olx |= MLX5_TXOFF_CONFIG_MULTI;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
/* We should support TCP Send Offload. */
olx |= MLX5_TXOFF_CONFIG_TSO;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support Software Parser for Tunnels. */
olx |= MLX5_TXOFF_CONFIG_SWP;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
/* We should support IP/TCP/UDP Checksums. */
olx |= MLX5_TXOFF_CONFIG_CSUM;
}
- if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
/* We should support VLAN insertion. */
olx |= MLX5_TXOFF_CONFIG_VLAN;
}
- if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
rte_mbuf_dynflag_lookup
(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index eb4d34ca559e..06cdeba662bc 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,35 +98,35 @@ uint64_t
mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
- uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT);
+ uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
struct mlx5_dev_config *config = &priv->config;
if (config->hw_csum)
- offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
if (config->tso)
- offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (config->tx_pp)
- offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+ offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
if (config->swp) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso)
- offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
}
if (config->tunnel_en) {
if (config->hw_csum)
- offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (config->tso)
- offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
}
if (!config->mprq.enabled)
- offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
return offloads;
}
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
unsigned int inlen_mode; /* Minimal required Inline data. */
unsigned int txqs_inline; /* Min Tx queues to enable inline. */
uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
- bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO);
+ bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
bool vlan_inline;
unsigned int temp;
txq_ctrl->txq.fast_free =
- !!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
- !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+ !!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+ !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
!config->mprq.enabled);
if (config->txqs_inline == MLX5_ARG_UNSET)
txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
* tx_burst routine.
*/
txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
- vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+ vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
!config->hw_vlan_insert;
/*
* If there are few Tx queues it is prioritized
@@ -979,9 +979,9 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
txq_ctrl->txq.tso_en = 1;
}
txq_ctrl->txq.tunnel_en = config->tunnel_en | config->swp;
- txq_ctrl->txq.swp_en = ((DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
+ txq_ctrl->txq.swp_en = ((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
txq_ctrl->txq.offloads) && config->swp;
}
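Applications negotiate these Tx offloads the same way under the new names. A minimal sketch (hypothetical helper, assuming an as-yet unconfigured `port_id`):

#include <rte_ethdev.h>

/* Request TSO only when the PMD reports the capability. */
static int
configure_tso_if_supported(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO)
		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
	/* One Rx and one Tx queue; defaults for everything else. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}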
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_STRIP);
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (!priv->config.hw_vlan_strip) {
DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c75147..578816fe0513 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -464,8 +464,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Remove this check once DPDK supports larger/variable
* indirection tables.
*/
- if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
- config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+ if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+ config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
config->ind_table_max_size);
DRV_LOG(DEBUG, "VLAN stripping is %ssupported",
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..37803fe34538 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
struct mvneta_priv *priv = dev->data->dev_private;
struct neta_ppio_params *ppio_params;
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
if (dev->data->nb_rx_queues > 1)
@@ -126,11 +126,11 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
MRVL_NETA_ETH_HDRS_LEN;
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ppio_params = &priv->ppio_params;
@@ -155,10 +155,10 @@ static int
mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *info)
{
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G;
info->max_rx_queues = MRVL_NETA_RXQ_MAX;
info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -510,28 +510,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
neta_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
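The PMD-side link-reporting idiom reads the same with the renamed constants. A minimal sketch (hypothetical helper and values, using the driver-internal header):

#include <ethdev_driver.h>

/* Fill and atomically publish the port's link state. */
static int
report_link(struct rte_eth_dev *dev, int up, uint32_t speed)
{
	struct rte_eth_link link = {
		.link_speed = speed,	/* e.g. RTE_ETH_SPEED_NUM_1G */
		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
		.link_autoneg = RTE_ETH_LINK_AUTONEG,
		.link_status = up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN,
	};

	return rte_eth_linkstatus_set(dev, &link);
}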
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..ccd47e8f4927 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,15 +54,15 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..d28125ce9635 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -735,7 +735,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_IPV4_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..539e196b807e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,16 +58,16 @@
#define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
/** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
#define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
PKT_TX_TCP_CKSUM | \
@@ -443,14 +443,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
if (rss_conf->rss_hf == 0) {
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
- } else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_2_TUPLE;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 1;
- } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+ } else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
priv->ppio_params.inqs_params.hash_type =
PP2_PPIO_HASH_T_5_TUPLE;
priv->rss_hf_tcp = 0;
@@ -484,8 +484,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
- dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+ dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
dev->data->dev_conf.rxmode.mq_mode);
return -EINVAL;
@@ -496,7 +496,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
MRVL_PP2_ETH_HDRS_LEN;
if (dev->data->mtu > priv->max_mtu) {
@@ -508,7 +508,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
}
}
- if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -530,7 +530,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return ret;
if (dev->data->nb_rx_queues == 1 &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
priv->configured = 1;
@@ -632,7 +632,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -653,7 +653,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
return ret;
}
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -673,14 +673,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
int ret;
if (!priv->ppio) {
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
ret = pp2_ppio_disable(priv->ppio);
if (ret)
return ret;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -902,7 +902,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
if (dev->data->all_multicast == 1)
mrvl_allmulticast_enable(dev);
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
ret = mrvl_populate_vlan_table(dev, 1);
if (ret) {
MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -938,11 +938,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
priv->flow_ctrl = 0;
}
- if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+ if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
ret = mrvl_dev_set_link_up(dev);
if (ret) {
MRVL_LOG(ERR, "Failed to set link up");
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
goto out;
}
}
@@ -1211,30 +1211,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
switch (ethtool_cmd_speed(&edata)) {
case SPEED_10:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
break;
case SPEED_100:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case SPEED_1000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case SPEED_2500:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case SPEED_10000:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
default:
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
- dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
- ETH_LINK_HALF_DUPLEX;
- dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
- ETH_LINK_FIXED;
+ dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+ RTE_ETH_LINK_HALF_DUPLEX;
+ dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+ RTE_ETH_LINK_FIXED;
pp2_ppio_get_link_state(priv->ppio, &link_up);
- dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
return 0;
}
@@ -1718,11 +1718,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
{
struct mrvl_priv *priv = dev->data->dev_private;
- info->speed_capa = ETH_LINK_SPEED_10M |
- ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_2_5G |
- ETH_LINK_SPEED_10G;
+ info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+ RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_2_5G |
+ RTE_ETH_LINK_SPEED_10G;
info->max_rx_queues = MRVL_PP2_RXQ_MAX;
info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1742,9 +1742,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
info->tx_offload_capa = MRVL_TX_OFFLOADS;
info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
- info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP;
+ info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP;
/* By default packets are dropped if no descriptors are available */
info->default_rxconf.rx_drop_en = 1;
@@ -1873,13 +1873,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
int ret;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
MRVL_LOG(ERR, "VLAN stripping is not supported\n");
return -ENOTSUP;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
ret = mrvl_populate_vlan_table(dev, 1);
else
ret = mrvl_populate_vlan_table(dev, 0);
@@ -1888,7 +1888,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return ret;
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
MRVL_LOG(ERR, "Extend VLAN not supported\n");
return -ENOTSUP;
}
@@ -2033,7 +2033,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rxq->priv = priv;
rxq->mp = mp;
- rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+ rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
rxq->queue_id = idx;
rxq->port_id = dev->data->port_id;
mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2189,7 +2189,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
return ret;
}
- fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+ fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
if (ret) {
@@ -2198,10 +2198,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
if (en) {
- if (fc_conf->mode == RTE_FC_NONE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ if (fc_conf->mode == RTE_ETH_FC_NONE)
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
}
return 0;
@@ -2247,19 +2247,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
rx_en = 1;
tx_en = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
rx_en = 0;
tx_en = 1;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
rx_en = 1;
tx_en = 0;
break;
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
rx_en = 0;
tx_en = 0;
break;
@@ -2336,11 +2336,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hash_type == PP2_PPIO_HASH_T_NONE)
rss_conf->rss_hf = 0;
else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
- rss_conf->rss_hf = ETH_RSS_IPV4;
+ rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
- rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
return 0;
}
@@ -3159,7 +3159,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
eth_dev->dev_ops = &mrvl_ops;
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_eth_dev_probing_finish(eth_dev);
return 0;
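On the application side, the renamed RTE_ETH_FC_* modes are consumed through the standard flow-control calls. A sketch (hypothetical helper, assuming a configured `port_id`):

#include <string.h>
#include <rte_ethdev.h>

/* Read back the current settings, then request pause in both directions. */
static int
enable_full_flow_ctrl(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	memset(&fc_conf, 0, sizeof(fc_conf));
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}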
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..15645f1e5d2a 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
#include "hn_nvs.h"
#include "ndis.h"
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NETVSC_ARG_LATENCY "latency"
#define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
hn_rndis_get_linkspeed(hv);
link = (struct rte_eth_link) {
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_autoneg = ETH_LINK_SPEED_FIXED,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
.link_speed = hv->link_speed / 10000,
};
if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
else
- link.link_status = ETH_LINK_DOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
if (old.link_status == link.link_status)
return 0;
PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
- (link.link_status == ETH_LINK_UP) ? "up" : "down");
+ (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
return rte_eth_linkstatus_set(dev, &link);
}
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
struct hn_data *hv = dev->data->dev_private;
int rc;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
dev_info->max_rx_pktlen = HN_MAX_XFER_LEN;
dev_info->max_mac_addrs = 1;
dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
dev_info->flow_type_rss_offloads = hv->rss_offloads;
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->max_rx_queues = hv->max_queues;
dev_info->max_tx_queues = hv->max_queues;
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
/* Convert from DPDK RSS hash flags to NDIS hash flags */
hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
- if (rss_conf->rss_hf & ETH_RSS_IPV4)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
hv->rss_hash |= NDIS_HASH_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV4;
- if (rss_conf->rss_hf & ETH_RSS_IPV6)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
hv->rss_hash |= NDIS_HASH_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
hv->rss_hash |= NDIS_HASH_IPV6_EX;
- if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
hv->rss_hash |= NDIS_HASH_TCP_IPV6;
- if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = 0;
if (hv->rss_hash & NDIS_HASH_IPV4)
- rss_conf->rss_hf |= ETH_RSS_IPV4;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (hv->rss_hash & NDIS_HASH_IPV6)
- rss_conf->rss_hf |= ETH_RSS_IPV6;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
if (hv->rss_hash & NDIS_HASH_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
- rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
- rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+ rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
return 0;
}
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
err = hn_rndis_conf_offload(hv, txmode->offloads,
rxmode->offloads);
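The renamed RTE_ETH_RSS_* bits are likewise what applications pass when selecting hash fields. A sketch (hypothetical helper, assuming the port was configured with RTE_ETH_MQ_RX_RSS):

#include <stddef.h>
#include <rte_ethdev.h>

/* Hash only on IPv4 and IPv4/TCP; a NULL key keeps the current one. */
static int
select_ipv4_tcp_hash(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,
		.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}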
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index e3f7e636d731..cacb30385404 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
hv->rss_offloads = 0;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
- hv->rss_offloads |= ETH_RSS_IPV4
- | ETH_RSS_NONFRAG_IPV4_TCP
- | ETH_RSS_NONFRAG_IPV4_UDP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV4
+ | RTE_ETH_RSS_NONFRAG_IPV4_TCP
+ | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
- hv->rss_offloads |= ETH_RSS_IPV6
- | ETH_RSS_NONFRAG_IPV6_TCP;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6
+ | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
- hv->rss_offloads |= ETH_RSS_IPV6_EX
- | ETH_RSS_IPV6_TCP_EX;
+ hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+ | RTE_ETH_RSS_IPV6_TCP_EX;
/* Commit! */
*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
== NDIS_RXCSUM_CAP_TCP4)
params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+ if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
== NDIS_TXCSUM_CAP_IP4)
params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
else
goto unsupported;
}
- if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
else
goto unsupported;
}
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
return error;
}
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
== HN_NDIS_TXCSUM_CAP_IP4)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
== HN_NDIS_TXCSUM_CAP_TCP4 &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
== HN_NDIS_TXCSUM_CAP_TCP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
(hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
== HN_NDIS_LSOV2_CAP_IP6)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
(hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
return 0;
}
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 7e91d5984740..c2ff1c999869 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = dev->data->nb_rx_queues;
dev_info->max_tx_queues = dev->data->nb_tx_queues;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
status.speed = MAC_SPEED_UNKNOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_status = ETH_LINK_DOWN;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_SPEED_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
if (internals->rxmac[0] != NULL) {
nc_rxmac_read_status(internals->rxmac[0], &status);
switch (status.speed) {
case MAC_SPEED_10G:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case MAC_SPEED_40G:
- link.link_speed = ETH_SPEED_NUM_40G;
+ link.link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case MAC_SPEED_100G:
- link.link_speed = ETH_SPEED_NUM_100G;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
break;
default:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
}
}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
nc_rxmac_read_status(internals->rxmac[i], &status);
if (status.enabled && status.link_up) {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
break;
}
}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index d6d4ba9663c6..f19e9834848b 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
}
/* Timestamps are enabled when there is a
* key-value pair: enable_timestamp=1
- * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+ * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
*/
if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
timestamp_check_handler, NULL) < 0) {
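Once the TODO above is addressed, enabling timestamps under the new name would pair the offload flag with the mbuf dynfield registration already used elsewhere in this series. A sketch (hypothetical helper):

#include <rte_ethdev.h>
#include <rte_mbuf_dyn.h>

static int timestamp_offset;
static uint64_t timestamp_rx_flag;

/* Request Rx timestamping and register the mbuf dynfield carrying it. */
static int
enable_rx_timestamp(struct rte_eth_conf *conf)
{
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
	return rte_mbuf_dyn_rx_timestamp_register(&timestamp_offset,
						   &timestamp_rx_flag);
}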
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..c526c949a64c 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* Checking TX mode */
if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
}
/* Checking RX mode */
- if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
!(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
PMD_INIT_LOG(INFO, "RSS not supported");
return -EINVAL;
@@ -359,20 +359,20 @@ nfp_check_offloads(struct rte_eth_dev *dev)
rxmode = &dev_conf->rxmode;
txmode = &dev_conf->txmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
hw->mtu = rxmode->max_rx_pkt_len;
- if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
/* L2 broadcast */
@@ -384,13 +384,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_L2MC;
/* TX checksum offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
/* LSO offload */
- if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
if (hw->cap & NFP_NET_CFG_CTRL_LSO)
ctrl |= NFP_NET_CFG_CTRL_LSO;
else
@@ -398,7 +398,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
/* RX gather */
- if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
ctrl |= NFP_NET_CFG_CTRL_GATHER;
return ctrl;
@@ -486,14 +486,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
int ret;
static const uint32_t ls_to_ethtool[] = {
- [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = ETH_SPEED_NUM_NONE,
- [NFP_NET_CFG_STS_LINK_RATE_1G] = ETH_SPEED_NUM_1G,
- [NFP_NET_CFG_STS_LINK_RATE_10G] = ETH_SPEED_NUM_10G,
- [NFP_NET_CFG_STS_LINK_RATE_25G] = ETH_SPEED_NUM_25G,
- [NFP_NET_CFG_STS_LINK_RATE_40G] = ETH_SPEED_NUM_40G,
- [NFP_NET_CFG_STS_LINK_RATE_50G] = ETH_SPEED_NUM_50G,
- [NFP_NET_CFG_STS_LINK_RATE_100G] = ETH_SPEED_NUM_100G,
+ [NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_UNKNOWN] = RTE_ETH_SPEED_NUM_NONE,
+ [NFP_NET_CFG_STS_LINK_RATE_1G] = RTE_ETH_SPEED_NUM_1G,
+ [NFP_NET_CFG_STS_LINK_RATE_10G] = RTE_ETH_SPEED_NUM_10G,
+ [NFP_NET_CFG_STS_LINK_RATE_25G] = RTE_ETH_SPEED_NUM_25G,
+ [NFP_NET_CFG_STS_LINK_RATE_40G] = RTE_ETH_SPEED_NUM_40G,
+ [NFP_NET_CFG_STS_LINK_RATE_50G] = RTE_ETH_SPEED_NUM_50G,
+ [NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
PMD_DRV_LOG(DEBUG, "Link update");
@@ -505,15 +505,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
memset(&link, 0, sizeof(struct rte_eth_link));
if (nn_link_status & NFP_NET_CFG_STS_LINK)
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
NFP_NET_CFG_STS_LINK_RATE_MASK;
if (nn_link_status >= RTE_DIM(ls_to_ethtool))
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
else
link.link_speed = ls_to_ethtool[nn_link_status];
@@ -702,26 +702,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = 1;
if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -758,25 +758,25 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
};
/* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
- ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_IPV6 |
- ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_NONFRAG_IPV6_UDP;
+ dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP;
dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
}
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -847,7 +847,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
if (link.link_status)
PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
dev->data->port_id, link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
? "full-duplex" : "half-duplex");
else
PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -964,9 +964,9 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if ((uint32_t)mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* update max frame size */
dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
@@ -990,12 +990,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
new_ctrl = 0;
/* Enable vlan strip if it is not configured yet */
- if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
!(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
/* Disable vlan strip just if it is configured */
- if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+ if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
@@ -1155,22 +1155,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
rss_hf = rss_conf->rss_hf;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1240,22 +1240,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
/* Propagate current RSS hash functions to caller */
rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 534a38c14f94..7a6a963bf6cc 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -140,7 +140,7 @@ nfp_net_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index b697b55865cc..ac960328c7de 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
dev_conf = &dev->data->dev_conf;
rxmode = &dev_conf->rxmode;
- if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
nfp_net_rss_config_default(dev);
update |= NFP_NET_CFG_UPDATE_RSS;
new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
dev->data->dev_link.link_status = link_up;
link_speeds = &dev->data->dev_conf.link_speeds;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
negotiate = true;
err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
allowed_speeds = 0;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
- allowed_speeds |= ETH_LINK_SPEED_1G;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_100M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
- allowed_speeds |= ETH_LINK_SPEED_10M;
+ allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
if (*link_speeds & ~allowed_speeds) {
PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = hw->mac.default_speeds;
} else {
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= NGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= NGBE_LINK_SPEED_100M_FULL;
- if (*link_speeds & ETH_LINK_SPEED_10M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
speed |= NGBE_LINK_SPEED_10M_FULL;
}
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_10M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
link.link_autoneg = !(dev->data->dev_conf.link_speeds &
- ~ETH_LINK_SPEED_AUTONEG);
+ ~RTE_ETH_LINK_SPEED_AUTONEG);
hw->mac.get_link_status = true;
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
return rte_eth_linkstatus_set(dev, &link);
intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case NGBE_LINK_SPEED_UNKNOWN:
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
break;
case NGBE_LINK_SPEED_10M_FULL:
- link.link_speed = ETH_SPEED_NUM_10M;
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
lan_speed = 0;
break;
case NGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
lan_speed = 1;
break;
case NGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
lan_speed = 2;
break;
}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
- if (link.link_status == ETH_LINK_UP) {
+ if (link.link_status == RTE_ETH_LINK_UP) {
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
ngbe_dev_link_update(dev, 0);
/* likely to up */
- if (link.link_status != ETH_LINK_UP)
+ if (link.link_status != RTE_ETH_LINK_UP)
/* handle it 1 sec later, wait it being stable */
timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
/* likely to down */
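From the application side, the renamed link-speed flags are set in dev_conf before configure. A sketch (hypothetical helper; RTE_ETH_LINK_SPEED_AUTONEG is 0, so it is also the default):

#include <rte_ethdev.h>

/* Pin the port at a fixed 1 Gb/s, or fall back to autonegotiation. */
static void
set_link_speeds(struct rte_eth_conf *conf, int fixed_1g)
{
	if (fixed_1g)
		conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
				    RTE_ETH_LINK_SPEED_1G;
	else
		conf->link_speeds = RTE_ETH_LINK_SPEED_AUTONEG;
}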
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 508bafc12a14..789c6b9c4b9a 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
rte_spinlock_t rss_lock;
uint16_t reta_size;
- struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
+ struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
RTE_RETA_GROUP_SIZE];
uint8_t rss_key[40]; /**< 40-byte hash key. */
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
if (dev == NULL)
return -EINVAL;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
if (dev == NULL)
return 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -538,7 +538,7 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
internals->port_id = eth_dev->data->port_id;
rte_eth_random_addr(internals->eth_addr.addr_bytes);
- internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+ internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
rte_memcpy(internals->rss_key, default_rss_key, 40);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..947dabdca2c5 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
(eth_dev->data->port_id),
link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
switch (nic->speed) {
case OCTEONTX_LINK_SPEED_SGMII:
- link->link_speed = ETH_SPEED_NUM_1G;
+ link->link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case OCTEONTX_LINK_SPEED_XAUI:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_RXAUI:
case OCTEONTX_LINK_SPEED_10G_R:
- link->link_speed = ETH_SPEED_NUM_10G;
+ link->link_speed = RTE_ETH_SPEED_NUM_10G;
break;
case OCTEONTX_LINK_SPEED_QSGMII:
- link->link_speed = ETH_SPEED_NUM_5G;
+ link->link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case OCTEONTX_LINK_SPEED_40G_R:
- link->link_speed = ETH_SPEED_NUM_40G;
+ link->link_speed = RTE_ETH_SPEED_NUM_40G;
break;
case OCTEONTX_LINK_SPEED_RESERVE1:
case OCTEONTX_LINK_SPEED_RESERVE2:
default:
- link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
octeontx_log_err("incorrect link speed %d", nic->speed);
break;
}
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
- nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
- if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= OCCTX_TX_MULTI_SEG_F;
return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
uint16_t flags = 0;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= OCCTX_RX_OFFLOAD_CSUM_F;
- if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
flags |= OCCTX_RX_MULTI_SEG_F;
eth_dev->data->scattered_rx = 1;
/* If scatter mode is enabled, TX should also be in multi
* seg mode, else a memory leak will occur
*/
- nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
- if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+ if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
- txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+ txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
octeontx_log_err("setting link speed/duplex not supported");
return -EINVAL;
}
@@ -534,13 +534,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
octeontx_log_err("Scatter mode is disabled");
return -EINVAL;
}
/* Check <seg size> * <max_seg> >= max_frame */
- if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -553,9 +553,9 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return rc;
if (frame_size > OCCTX_L2_MAX_LEN)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Update max_rx_pkt_len */
data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -582,7 +582,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
/* Setup scatter mode if needed by jumbo */
if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
- nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
}
@@ -854,10 +854,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
struct octeontx_nic *nic = octeontx_pmd_priv(dev);
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_40G;
/* Min/Max MTU supported */
dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1369,7 +1369,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
nic->ev_ports = 1;
nic->print_flag = -1;
- data->dev_link.link_status = ETH_LINK_DOWN;
+ data->dev_link.link_status = RTE_ETH_LINK_DOWN;
data->dev_started = 0;
data->promiscuous = 0;
data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..7215039507c3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,24 +55,24 @@
#define OCCTX_MAX_MTU (OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
#define OCTEONTX_RX_OFFLOADS ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
static inline struct octeontx_nic *
octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
rc = octeontx_vlan_hw_filter(nic, true);
if (rc)
goto done;
- nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
} else {
rc = octeontx_vlan_hw_filter(nic, false);
if (rc)
goto done;
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
}
}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
TAILQ_INIT(&nic->vlan_info.fltr_tbl);
- rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+ rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
if (rc)
octeontx_log_err("Failed to set vlan offload rc=%d", rc);
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
return rc;
if (conf.rx_pause && conf.tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (conf.rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (conf.tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
/* low_water & high_water values are in Bytes */
fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
return -EINVAL;
}
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
conf.high_water = fc_conf->high_water;
conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..ebe503438144 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
if (otx2_dev_is_vf(dev) ||
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
return capa;
}
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
/* TSO not supported for earlier chip revisions */
if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO);
+ capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
return capa;
}
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
req->npa_func = otx2_npa_pf_func_get();
req->sso_func = otx2_sso_pf_func_get();
req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->rq.sso_ena = 0;
- if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
aq->rq.ipsech_ena = 1;
aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -664,7 +664,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
* These are needed in deriving raw clock value from tsc counter.
* read_clock eth op returns raw clock value.
*/
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev)) {
rc = otx2_nix_raw_clock_tsc_conv(dev);
if (rc) {
@@ -691,7 +691,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
* Maximum three segments can be supported with W8, Choose
* NIX_MAXSQESZ_W16 for multi segment offload.
*/
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
return NIX_MAXSQESZ_W16;
else
return NIX_MAXSQESZ_W8;
@@ -706,29 +706,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
uint16_t flags = 0;
- if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
- (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+ (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
flags |= NIX_RX_OFFLOAD_RSS_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP))
+ if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
flags |= NIX_RX_OFFLOAD_SECURITY_F;
if (!dev->ptype_disable)
@@ -767,43 +767,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
- if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
- conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+ conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
- if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
- if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
- conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
- conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
- conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+ conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
- if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
- if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
flags |= NIX_TX_MULTI_SEG_F;
/* Enable Inner checksum for TSO */
- if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+ if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
/* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO))
+ if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
flags |= (NIX_TX_OFFLOAD_TSO_F |
NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
NIX_TX_OFFLOAD_L3_L4_CSUM_F);
- if (conf & DEV_TX_OFFLOAD_SECURITY)
+ if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
flags |= NIX_TX_OFFLOAD_SECURITY_F;
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
flags |= NIX_TX_OFFLOAD_TSTAMP_F;
return flags;
@@ -913,8 +913,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
- dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Setting up the rx[tx]_offload_flags due to change
* in rx[tx]_offloads.
@@ -1857,21 +1857,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
goto fail_configure;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
goto fail_configure;
}
if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
otx2_err("Outer IP and SCTP checksum unsupported");
goto fail_configure;
}
@@ -2244,7 +2244,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
* enabled in PF owning this VF
*/
memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_enable(eth_dev);
else
@@ -2573,8 +2573,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
rc = otx2_eth_sec_ctx_create(eth_dev);
if (rc)
goto free_mac_addrs;
- dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+ dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
/* Initialize rte-flow */
rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..04e43b63c192 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,44 +117,44 @@
#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
#define CQ_TIMER_THRESH_MAX 255
-#define NIX_RSS_L3_L4_SRC_DST (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
- | ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+ | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
- ETH_RSS_TCP | ETH_RSS_SCTP | \
- ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
- ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+ RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+ NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_C_VLAN)
#define NIX_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MT_LOCKFREE | \
- DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_QINQ_INSERT | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_SCTP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM)
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
#define NIX_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_QINQ_STRIP | \
- DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
val = atoi(value);
- if (val <= ETH_RSS_RETA_SIZE_64)
- val = ETH_RSS_RETA_SIZE_64;
- else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
- val = ETH_RSS_RETA_SIZE_128;
- else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
- val = ETH_RSS_RETA_SIZE_256;
+ if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+ val = RTE_ETH_RSS_RETA_SIZE_64;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+ val = RTE_ETH_RSS_RETA_SIZE_128;
+ else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+ val = RTE_ETH_RSS_RETA_SIZE_256;
else
val = NIX_RSS_RETA_SIZE;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..41761085e156 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -29,11 +29,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
* when this feature has not been enabled before.
*/
if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
return -EINVAL;
/* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
(frame_size > buffsz * NIX_RX_NB_SEG_MAX))
return -EINVAL;
@@ -59,9 +59,9 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return rc;
if (frame_size > NIX_L2_MAX_LEN)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* Update max_rx_pkt_len */
data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -590,17 +590,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
};
/* Auto negotiation disabled */
- devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
- ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+ RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
/* 50G and 100G to be supported for board version C0
* and above.
*/
if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= ETH_LINK_SPEED_50G |
- ETH_LINK_SPEED_100G;
+ devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+ RTE_ETH_LINK_SPEED_100G;
}
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cbf2..e1654ef5b284 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -890,8 +890,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
!RTE_IS_POWER_OF_2(sa_width));
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return 0;
if (rte_security_dynfield_register() < 0)
@@ -933,8 +933,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
uint16_t port = eth_dev->data->port_id;
char name[RTE_MEMZONE_NAMESIZE];
- if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+ !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
goto err_exit;
}
- if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
rc = flow_update_sec_tt(dev, actions);
if (rc != 0) {
rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
int rc;
if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
goto done;
if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rsp->rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (rsp->tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
done:
return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
if (fc_conf->mode == fc->mode)
return 0;
- rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
- (fc_conf->mode == RTE_FC_TX_PAUSE);
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+ (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
/* Check if TX pause frame is already enabled or not */
if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (otx2_dev_is_Ax(dev) &&
(dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
fc_conf.mode =
- (fc_conf.mode == RTE_FC_FULL ||
- fc_conf.mode == RTE_FC_TX_PAUSE) ?
- RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ (fc_conf.mode == RTE_ETH_FC_FULL ||
+ fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+ RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
}
return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
return 0;
memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+ /* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
* by AF driver, update those info in PMD structure.
*/
rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
goto exit;
fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
- (fc_conf.mode == RTE_FC_TX_PAUSE);
+ fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+ fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+ (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
exit:
return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 63a33142a579..3fe6727f1d2a 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
attr, "No support of RSS in egress");
}
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
act, "multi-queue mode is disabled");
@@ -1188,7 +1188,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
*FLOW_KEY_ALG index. So, till we update the action with
*flow_key_alg index, set the action to drop.
*/
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
flow->npc_action = NIX_RX_ACTIONOP_DROP;
else
flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
otx2_info("Port %d: Link Up - speed %u Mbps - %s",
(int)(eth_dev->data->port_id),
(uint32_t)link->link_speed,
- link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
else
otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
eth_link.link_status = link->link_up;
eth_link.link_speed = link->speed;
- eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
eth_link.link_duplex = link->full_duplex;
otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
static int
lbk_link_update(struct rte_eth_link *link)
{
- link->link_status = ETH_LINK_UP;
- link->link_speed = ETH_SPEED_NUM_100G;
- link->link_autoneg = ETH_LINK_FIXED;
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_status = RTE_ETH_LINK_UP;
+ link->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return 0;
}
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
link->link_status = rsp->link_info.link_up;
link->link_speed = rsp->link_info.speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
if (rsp->link_info.full_duplex)
link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
/* 50G and 100G to be supported for board version C0 and above */
if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & ETH_LINK_SPEED_100G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_100G)
link_speed = 100000;
- if (link_speeds & ETH_LINK_SPEED_50G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_50G)
link_speed = 50000;
}
- if (link_speeds & ETH_LINK_SPEED_40G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_40G)
link_speed = 40000;
- if (link_speeds & ETH_LINK_SPEED_25G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_25G)
link_speed = 25000;
- if (link_speeds & ETH_LINK_SPEED_20G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_20G)
link_speed = 20000;
- if (link_speeds & ETH_LINK_SPEED_10G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_10G)
link_speed = 10000;
- if (link_speeds & ETH_LINK_SPEED_5G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_5G)
link_speed = 5000;
- if (link_speeds & ETH_LINK_SPEED_1G)
+ if (link_speeds & RTE_ETH_LINK_SPEED_1G)
link_speed = 1000;
return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
static inline uint8_t
nix_parse_eth_link_duplex(uint32_t link_speeds)
{
- if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & ETH_LINK_SPEED_100M_HD))
- return ETH_LINK_HALF_DUPLEX;
+ if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+ (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+ return RTE_ETH_LINK_HALF_DUPLEX;
else
- return ETH_LINK_FULL_DUPLEX;
+ return RTE_ETH_LINK_FULL_DUPLEX;
}
int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+ cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
return cgx_change_mode(dev, &cfg);
}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
/* System time should be already on by default */
nix_start_timecounters(eth_dev);
- dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
return -EINVAL;
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..cbc6d67a7fcf 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
}
#define RSS_IPV4_ENABLE ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_SCTP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
#define RSS_IPV6_ENABLE ( \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define RSS_IPV6_EX_ENABLE ( \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define RSS_MAX_LEVELS 3
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
dev->rss_info.nix_rss = ethdev_rss;
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
}
- if (ethdev_rss & ETH_RSS_C_VLAN)
+ if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
- if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
- if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
- if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
- if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+ if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
if (ethdev_rss & RSS_IPV6_ENABLE)
flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
- if (ethdev_rss & ETH_RSS_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_TCP)
flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
- if (ethdev_rss & ETH_RSS_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_UDP)
flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
- if (ethdev_rss & ETH_RSS_SCTP)
+ if (ethdev_rss & RTE_ETH_RSS_SCTP)
flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
- if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
if (ethdev_rss & RSS_IPV6_EX_ENABLE)
flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
flowkey_cfg |= FLOW_KEY_TYPE_PORT;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
- if (ethdev_rss & ETH_RSS_GTPU)
+ if (ethdev_rss & RTE_ETH_RSS_GTPU)
flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
otx2_nix_rss_set_key(dev, rss_conf->rss_key,
(uint32_t)rss_conf->rss_key_len);
- rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
int rc;
/* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
return 0;
/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
}
rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+ rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
if (rss_hash_level)
rss_hash_level -= 1;
flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
/* For PTP enabled, scalar rx function should be chosen as most of the
* PTP apps are implemented to rx burst 1 pkt.
*/
- if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
pick_rx_func(eth_dev, nix_eth_rx_burst);
else
pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
- if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
else
pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
- if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
action = NIX_RX_ACTIONOP_UCAST;
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
action = NIX_RX_ACTIONOP_RSS;
action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
* Take offset from LA since in case of untagged packet,
* lbptr is zero.
*/
- if (type == ETH_VLAN_TYPE_OUTER) {
+ if (type == RTE_ETH_VLAN_TYPE_OUTER) {
vtag_action.act.vtag0_def = vtag_index;
vtag_action.act.vtag0_lid = NPC_LID_LA;
vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
if (vlan->strip_on ||
(vlan->qinq_on && !vlan->qinq_before_def)) {
if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- ETH_MQ_RX_RSS)
+ RTE_ETH_MQ_RX_RSS)
vlan->def_rx_mcam_ent.action |=
NIX_RX_ACTIONOP_RSS;
else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
rxmode = &eth_dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, true);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
rc = nix_vlan_hw_strip(eth_dev, false);
}
if (rc)
goto done;
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, true, 0);
} else {
- offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
rc = nix_vlan_hw_filter(eth_dev, false, 0);
}
if (rc)
goto done;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
if (!dev->vlan_info.qinq_on) {
- offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, true);
if (rc)
goto done;
}
} else {
if (dev->vlan_info.qinq_on) {
- offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
rc = otx2_nix_config_double_vlan(eth_dev, false);
if (rc)
goto done;
}
}
- if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
tpid_cfg->tpid = tpid;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
else
tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
if (rc)
return rc;
- if (type == ETH_VLAN_TYPE_OUTER)
+ if (type == RTE_ETH_VLAN_TYPE_OUTER)
dev->vlan_info.outer_vlan_tpid = tpid;
else
dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
vlan->outer_vlan_idx = 0;
}
- rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
vtag_index, on);
if (rc < 0) {
printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
} else {
/* Reinstall all mcam entries now if filter offload is set */
if (eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
rc = otx2_nix_vlan_offload_set(eth_dev, mask);
if (rc) {
otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..7bfa6098e230 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,15 +33,15 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
otx_epvf = OTX_EP_DEV(eth_dev);
- devinfo->speed_capa = ETH_LINK_SPEED_10G;
+ devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
devinfo->max_rx_queues = otx_epvf->max_rx_queues;
devinfo->max_tx_queues = otx_epvf->max_tx_queues;
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
- devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ devinfo->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..77593111f141 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
struct otx_ep_buf_free_info *finfo;
int j, frags, num_sg;
- if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+ if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
goto xmit_fail;
finfo = (struct otx_ep_buf_free_info *)
@@ -954,13 +954,13 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l4_len = hdr_lens.l4_len;
if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
if (droq_pkt->nb_segs > 1 &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+ !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
goto oq_read_fail;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a432a..13d18e875444 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -135,10 +135,10 @@ static const char *valid_arguments[] = {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -655,7 +655,7 @@ eth_dev_start(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -710,7 +710,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..a74f27bf8158 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
static struct pfe *g_pfe;
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
/* Supported Tx offloads */
static uint64_t dev_tx_offloads_sup =
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
/* TODO: make pfe_svr a runtime option.
* Driver should be able to get the SVR
@@ -613,9 +613,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
}
link.link_status = lstatus;
- link.link_speed = ETH_LINK_SPEED_1G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_speed = RTE_ETH_LINK_SPEED_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
pfe_eth_atomic_write_link_status(dev, &link);
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t; /* In DWORDS !!! */
struct eth_phy_cfg {
/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
u32 speed;
-#define ETH_SPEED_AUTONEG 0
-#define ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG 0
+#define RTE_ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */
u32 pause; /* bitmask */
#define ETH_PAUSE_NONE 0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..0af2f919e9d5 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
}
use_tx_offload = !!(tx_offloads &
- (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
- DEV_TX_OFFLOAD_TCP_TSO | /* tso */
- DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+ (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
if (use_tx_offload) {
DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
(void)qede_vlan_stripping(eth_dev, 1);
else
(void)qede_vlan_stripping(eth_dev, 0);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
/* VLAN filtering kicks in when a VLAN is added */
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
qede_vlan_filter_set(eth_dev, 0, 1);
} else {
if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
* enabled
*/
eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_VLAN_FILTER;
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
} else {
qede_vlan_filter_set(eth_dev, 0, 0);
}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
}
/* Configure TPA parameters */
- if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (qede_enable_tpa(eth_dev, true))
return -EINVAL;
/* Enable scatter mode for LRO */
if (!eth_dev->data->scattered_rx)
- rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}
/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
* Also, we would like to retain similar behavior in PF case, so we
* don't do PF/VF specific check here.
*/
- if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
if (qede_config_rss(eth_dev))
goto err;
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE(edev);
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* We need to have min 1 RX queue.There is no min check in
* rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_NOTICE(edev, false,
"Invalid devargs supplied, requested change will not take effect\n");
- if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
- rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+ if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+ rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
DP_ERR(edev, "Unsupported multi-queue mode\n");
return -ENOTSUP;
}
@@ -1313,12 +1313,12 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
}
/* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
eth_dev->data->mtu =
eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1327,8 +1327,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->mtu = eth_dev->data->mtu;
/* Enable VLAN offloads by default */
- ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK |
- ETH_VLAN_FILTER_MASK);
+ ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK |
+ RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -1391,35 +1391,35 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
- dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_RSS_HASH);
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
dev_info->rx_queue_offload_capa = 0;
/* TX offloads are on a per-packet basis, so it is applicable
* to both at port and queue levels.
*/
- dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
dev_info->default_txconf = (struct rte_eth_txconf) {
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
};
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1431,17 +1431,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
memset(&link, 0, sizeof(struct qed_link_output));
qdev->ops->common->get_link(edev, &link);
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
- speed_cap |= ETH_LINK_SPEED_1G;
+ speed_cap |= RTE_ETH_LINK_SPEED_1G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
- speed_cap |= ETH_LINK_SPEED_10G;
+ speed_cap |= RTE_ETH_LINK_SPEED_10G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
- speed_cap |= ETH_LINK_SPEED_25G;
+ speed_cap |= RTE_ETH_LINK_SPEED_25G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
- speed_cap |= ETH_LINK_SPEED_40G;
+ speed_cap |= RTE_ETH_LINK_SPEED_40G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
- speed_cap |= ETH_LINK_SPEED_50G;
+ speed_cap |= RTE_ETH_LINK_SPEED_50G;
if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
- speed_cap |= ETH_LINK_SPEED_100G;
+ speed_cap |= RTE_ETH_LINK_SPEED_100G;
dev_info->speed_capa = speed_cap;
return 0;
@@ -1468,10 +1468,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
/* Link Mode */
switch (q_link.duplex) {
case QEDE_DUPLEX_HALF:
- link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case QEDE_DUPLEX_FULL:
- link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case QEDE_DUPLEX_UNKNOWN:
default:
@@ -1480,11 +1480,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
link.link_duplex = link_duplex;
/* Link Status */
- link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
/* AN */
link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
- ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
link.link_speed, link.link_duplex,
@@ -2019,12 +2019,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
}
/* Pause is assumed to be supported (SUPPORTED_Pause) */
- if (fc_conf->mode == RTE_FC_FULL)
+ if (fc_conf->mode == RTE_ETH_FC_FULL)
params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
QED_LINK_PAUSE_RX_ENABLE);
- if (fc_conf->mode == RTE_FC_TX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
- if (fc_conf->mode == RTE_FC_RX_PAUSE)
+ if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
params.link_up = true;
@@ -2048,13 +2048,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
QED_LINK_PAUSE_TX_ENABLE))
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -2095,14 +2095,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
{
*rss_caps = 0;
- *rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
- *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? ECORE_RSS_IPV4_UDP : 0;
+ *rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? ECORE_RSS_IPV6_UDP : 0;
}
int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2228,7 +2228,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
uint8_t entry;
int rc = 0;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported by hardware\n",
reta_size);
return -EINVAL;
@@ -2289,7 +2289,7 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
uint16_t i, idx, shift;
uint8_t entry;
- if (reta_size > ETH_RSS_RETA_SIZE_128) {
+ if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
DP_ERR(edev, "reta_size %d is not supported\n",
reta_size);
return -EINVAL;
@@ -2369,9 +2369,9 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
if (frame_size > QEDE_ETH_MAX_LEN)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
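For context, a minimal application-side sketch (not part of this patch) of how the renamed RSS flags reach the PMD; the driver's qede_init_rss_caps() above then maps rss_hf into ECORE_RSS_* bits. Port and queue counts are illustrative:

#include <rte_ethdev.h>

static int
enable_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL, /* keep the PMD default key */
			.rss_hf = RTE_ETH_RSS_IPV4 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}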
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..ceb47c17d0d6 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+ struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
/* check FDIR modes */
switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
ECORE_TUNN_CLSS_MAC_VLAN, false);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
DP_ERR(edev, "UDP port %u doesn't exist\n",
tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
memset(&tunn, 0, sizeof(tunn));
switch (tunnel_udp->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
qdev->vxlan.udp_port = udp_port;
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
DP_INFO(edev,
"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..144dfef269f3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplify rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
- if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
(max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
#define QEDE_MAX_ETHER_HDR_LEN (RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
#define QEDE_ETH_MAX_LEN (RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
-#define QEDE_RSS_OFFLOAD_ALL (ETH_RSS_IPV4 |\
- ETH_RSS_NONFRAG_IPV4_TCP |\
- ETH_RSS_NONFRAG_IPV4_UDP |\
- ETH_RSS_IPV6 |\
- ETH_RSS_NONFRAG_IPV6_TCP |\
- ETH_RSS_NONFRAG_IPV6_UDP |\
- ETH_RSS_VXLAN |\
- ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL (RTE_ETH_RSS_IPV4 |\
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP |\
+ RTE_ETH_RSS_IPV6 |\
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP |\
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP |\
+ RTE_ETH_RSS_VXLAN |\
+ RTE_ETH_RSS_GENEVE)
#define QEDE_RXTX_MAX(qdev) \
(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
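A small sketch of the capability check an application can make against QEDE_RSS_OFFLOAD_ALL, which the PMD advertises through flow_type_rss_offloads; the helper name is hypothetical:

#include <rte_ethdev.h>

static int
rss_hf_supported(uint16_t port_id, uint64_t rss_hf)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	/* every requested hash bit must be advertised by the port */
	return (rss_hf & ~info.flow_type_rss_offloads) == 0;
}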
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 1faf38a714cf..8d1ef5fb22bc 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
};
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
static int
eth_dev_start(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -110,21 +110,21 @@ static int
eth_dev_stop(struct rte_eth_dev *dev)
{
dev->data->dev_started = 0;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_down(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
static int
eth_dev_set_link_up(struct rte_eth_dev *dev)
{
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = 1;
dev_info->max_rx_pktlen = (uint32_t)-1;
dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
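In the same spirit, offload requests are usually masked against what the port advertises; for the ring PMD above that is just SCATTER on Rx and MULTI_SEGS on Tx. A hedged sketch, helper name hypothetical:

#include <rte_ethdev.h>

static uint64_t
clamp_tx_offloads(uint16_t port_id, uint64_t wanted)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	/* drop any requested Tx offload the port does not advertise */
	return wanted & info.tx_offload_capa;
}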
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 274a98e228e4..d93f9d2418b9 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -81,13 +81,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
{
uint32_t phy_caps = 0;
- if (~speeds & ETH_LINK_SPEED_FIXED) {
+ if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
phy_caps |= (1 << EFX_PHY_CAP_AN);
/*
* If no speeds are specified in the mask, any supported
* may be negotiated
*/
- if (speeds == ETH_LINK_SPEED_AUTONEG)
+ if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
phy_caps |=
(1 << EFX_PHY_CAP_1000FDX) |
(1 << EFX_PHY_CAP_10000FDX) |
@@ -96,17 +96,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
(1 << EFX_PHY_CAP_50000FDX) |
(1 << EFX_PHY_CAP_100000FDX);
}
- if (speeds & ETH_LINK_SPEED_1G)
+ if (speeds & RTE_ETH_LINK_SPEED_1G)
phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
- if (speeds & ETH_LINK_SPEED_10G)
+ if (speeds & RTE_ETH_LINK_SPEED_10G)
phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
- if (speeds & ETH_LINK_SPEED_25G)
+ if (speeds & RTE_ETH_LINK_SPEED_25G)
phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
- if (speeds & ETH_LINK_SPEED_40G)
+ if (speeds & RTE_ETH_LINK_SPEED_40G)
phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
- if (speeds & ETH_LINK_SPEED_50G)
+ if (speeds & RTE_ETH_LINK_SPEED_50G)
phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
- if (speeds & ETH_LINK_SPEED_100G)
+ if (speeds & RTE_ETH_LINK_SPEED_100G)
phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
return phy_caps;
@@ -337,10 +337,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
tx_offloads |= txq_info->offloads;
}
- if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
else
req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -827,7 +827,7 @@ sfc_attach(struct sfc_adapter *sa)
sa->priv.shared->tunnel_encaps =
encp->enc_tunnel_encapsulations_supported;
- if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+ if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso)
@@ -836,8 +836,8 @@ sfc_attach(struct sfc_adapter *sa)
if (sa->tso &&
(sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
- (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+ (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
encp->enc_tso_v3_enabled;
if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d4cb96881cd2..ca8774ad0950 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -916,11 +916,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.get_dev_info = sfc_ef100_rx_get_dev_info,
.qsize_up_rings = sfc_ef100_rx_qsize_up_rings,
.qcreate = sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d3470..7c91ee3fcb53 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -942,16 +942,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
.dev_offload_capa = 0,
- .queue_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef100_get_dev_info,
.qsize_up_rings = sfc_ef100_tx_qsize_up_rings,
.qcreate = sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
},
.features = SFC_DP_RX_FEAT_FLOW_FLAG |
SFC_DP_RX_FEAT_FLOW_MARK,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
.queue_offload_capa = 0,
.get_dev_info = sfc_ef10_essb_rx_get_dev_info,
.pool_ops_supported = sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
},
.features = SFC_DP_RX_FEAT_MULTI_PROCESS |
SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.get_dev_info = sfc_ef10_rx_get_dev_info,
.qsize_up_rings = sfc_ef10_rx_qsize_up_rings,
.qcreate = sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
if (txq->sw_ring == NULL)
goto fail_sw_ring_alloc;
- if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+ if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
info->txq_entries,
SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_EF10,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
.type = SFC_DP_TX,
},
.features = SFC_DP_TX_FEAT_MULTI_PROCESS,
- .dev_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
.get_dev_info = sfc_ef10_get_dev_info,
.qsize_up_rings = sfc_ef10_tx_qsize_up_rings,
.qcreate = sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..8734bca4876f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -102,19 +102,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = sa->sriov.num_vfs;
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
- dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
dev_info->max_rx_queues = sa->rxq_max;
dev_info->max_tx_queues = sa->txq_max;
@@ -142,8 +142,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
dev_info->tx_queue_offload_capa;
- if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->default_txconf.offloads |= txq_offloads_def;
@@ -912,16 +912,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
switch (link_fc) {
case 0:
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
break;
case EFX_FCNTL_RESPOND:
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
break;
case EFX_FCNTL_GENERATE:
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
break;
case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
break;
default:
sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -952,16 +952,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
}
switch (fc_conf->mode) {
- case RTE_FC_NONE:
+ case RTE_ETH_FC_NONE:
fcntl = 0;
break;
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
fcntl = EFX_FCNTL_RESPOND;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
fcntl = EFX_FCNTL_GENERATE;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
break;
default:
@@ -1070,7 +1070,7 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
}
dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
@@ -1247,7 +1247,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
- qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+ qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
qinfo->scattered_rx = 1;
}
qinfo->nb_desc = rxq_info->entries;
@@ -1472,9 +1472,9 @@ static efx_tunnel_protocol_t
sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
{
switch (rte_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
return EFX_TUNNEL_PROTOCOL_VXLAN;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
return EFX_TUNNEL_PROTOCOL_GENEVE;
default:
return EFX_TUNNEL_NPROTOS;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 4f5993a68d23..dc2cdfea13c4 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -390,7 +390,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
const struct rte_flow_item_vlan *spec = NULL;
const struct rte_flow_item_vlan *mask = NULL;
const struct rte_flow_item_vlan supp_mask = {
- .tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+ .tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
.inner_type = RTE_BE16(0xffff),
};
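As a usage sketch, a flow rule built against the supp_mask above keeps its VLAN mask within the 12-bit VID field; the VLAN ID value is illustrative:

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* match VLAN ID 100; mask limited to the VID bits (0x0fff) */
static const struct rte_flow_item_vlan vlan_spec = {
	.tci = RTE_BE16(100),
};
static const struct rte_flow_item_vlan vlan_mask = {
	.tci = RTE_BE16(RTE_ETH_VLAN_ID_MAX),
};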
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..dea5272a79bc 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -387,7 +387,7 @@ sfc_port_configure(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
port->pdu = rxmode->max_rx_pkt_len;
else
port->pdu = EFX_MAC_PDU(dev_data->mtu);
@@ -577,66 +577,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
memset(link_info, 0, sizeof(*link_info));
if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
- link_info->link_status = ETH_LINK_DOWN;
+ link_info->link_status = RTE_ETH_LINK_DOWN;
else
- link_info->link_status = ETH_LINK_UP;
+ link_info->link_status = RTE_ETH_LINK_UP;
switch (link_mode) {
case EFX_LINK_10HDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_10FDX:
- link_info->link_speed = ETH_SPEED_NUM_10M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100HDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_100FDX:
- link_info->link_speed = ETH_SPEED_NUM_100M;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100M;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_1000HDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
break;
case EFX_LINK_1000FDX:
- link_info->link_speed = ETH_SPEED_NUM_1G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_1G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_10000FDX:
- link_info->link_speed = ETH_SPEED_NUM_10G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_10G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_25000FDX:
- link_info->link_speed = ETH_SPEED_NUM_25G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_25G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_40000FDX:
- link_info->link_speed = ETH_SPEED_NUM_40G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_40G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_50000FDX:
- link_info->link_speed = ETH_SPEED_NUM_50G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_50G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
case EFX_LINK_100000FDX:
- link_info->link_speed = ETH_SPEED_NUM_100G;
- link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_100G;
+ link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
break;
default:
SFC_ASSERT(B_FALSE);
/* FALLTHROUGH */
case EFX_LINK_UNKNOWN:
case EFX_LINK_DOWN:
- link_info->link_speed = ETH_SPEED_NUM_NONE;
+ link_info->link_speed = RTE_ETH_SPEED_NUM_NONE;
link_info->link_duplex = 0;
break;
}
- link_info->link_autoneg = ETH_LINK_AUTONEG;
+ link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
int
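For reference, a short sketch of consuming the rte_eth_link fields that sfc_port_link_mode_to_info() fills in; rte_eth_link_speed_to_str() is assumed available (it may still be experimental in this release):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s %s-duplex, link %s\n", port_id,
	       rte_eth_link_speed_to_str(link.link_speed),
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? "full" : "half",
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
}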
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..a83b47a8d111 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -647,9 +647,9 @@ struct sfc_dp_rx sfc_efx_rx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_RX_EFX,
},
.features = SFC_DP_RX_FEAT_INTR,
- .dev_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_RSS_HASH,
- .queue_offload_capa = DEV_RX_OFFLOAD_SCATTER,
+ .dev_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH,
+ .queue_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER,
.qsize_up_rings = sfc_efx_rx_qsize_up_rings,
.qcreate = sfc_efx_rx_qcreate,
.qdestroy = sfc_efx_rx_qdestroy,
@@ -930,7 +930,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (encp->enc_tunnel_encapsulations_supported == 0)
- no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
return ~no_caps;
}
@@ -940,7 +940,7 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ caps |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
return caps & sfc_rx_get_offload_mask(sa);
}
@@ -1141,7 +1141,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
encp->enc_rx_prefix_size,
- (offloads & DEV_RX_OFFLOAD_SCATTER),
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
encp->enc_rx_scatter_max,
&error)) {
sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1167,15 +1167,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
rxq_info->type_flags |=
- (offloads & DEV_RX_OFFLOAD_SCATTER) ?
+ (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
if ((encp->enc_tunnel_encapsulations_supported != 0) &&
(sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
- DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
- if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+ if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_RX, sw_index,
@@ -1205,7 +1205,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
rxq_info->refill_mb_pool = mb_pool;
if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
- (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+ (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
else
rxq_info->rxq_flags = 0;
@@ -1301,19 +1301,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
* Mapping between RTE RSS hash functions and their EFX counterparts.
*/
static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
- { ETH_RSS_NONFRAG_IPV4_TCP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV4_UDP,
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP,
EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
- { ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
- { ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+ { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE) },
- { ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
- ETH_RSS_IPV6_EX,
+ { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_EX,
EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
EFX_RX_HASH(IPV6, 2TUPLE) }
};
@@ -1633,10 +1633,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
int rc = 0;
switch (rxmode->mq_mode) {
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
/* No special checks are required */
break;
- case ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_RSS:
if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
sfc_err(sa, "RSS is not available");
rc = EINVAL;
@@ -1653,16 +1653,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
* so unsupported offloads cannot be added as the result of
* below check.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
- (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+ if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+ (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
- rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
}
- if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
- (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+ if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+ (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
- rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
}
return rc;
@@ -1808,7 +1808,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
}
configure_rss:
- rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+ rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d261..359acc71a47f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
uint64_t no_caps = 0;
if (!encp->enc_hw_tx_insert_vlan_enabled)
- no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (!encp->enc_tunnel_encapsulations_supported)
- no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
if (!sa->tso)
- no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
- no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
if (!sa->tso_encap ||
(encp->enc_tunnel_encapsulations_supported &
(1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
- no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
return ~no_caps;
}
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
}
/* We either perform both TCP and UDP offload, or no offload at all */
- if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
- ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+ if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+ ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
sfc_err(sa, "TCP and UDP offloads can't be set independently");
rc = EINVAL;
}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
int rc = 0;
switch (txmode->mq_mode) {
- case ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_NONE:
break;
default:
sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -515,23 +515,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
if (rc != 0)
goto fail_ev_qstart;
- if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_IPV4;
- if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_IPV4;
- if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
- (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+ if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+ (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
flags |= EFX_TXQ_CKSUM_TCPUDP;
- if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+ if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
}
- if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+ if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
flags |= EFX_TXQ_FATSOV2;
rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -862,9 +862,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/*
* Here VLAN TCI is expected to be zero if no
- * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
* if the calling app ignores the absence of
- * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+ * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
* TX_ERROR will occur
*/
pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1228,13 +1228,13 @@ struct sfc_dp_tx sfc_efx_tx = {
.hw_fw_caps = SFC_DP_HW_FW_CAP_TX_EFX,
},
.features = 0,
- .dev_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS,
- .queue_offload_capa = DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ .dev_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+ .queue_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
.qsize_up_rings = sfc_efx_tx_qsize_up_rings,
.qcreate = sfc_efx_tx_qcreate,
.qdestroy = sfc_efx_tx_qdestroy,
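A sketch of the constraint sfc_tx_qcheck_conf() enforces above: on this family, TCP and UDP Tx checksum offloads come as a pair, so an application should request both or neither (helper name hypothetical):

#include <rte_ethdev.h>

static uint64_t
pair_l4_csum_offloads(uint64_t offloads)
{
	const uint64_t both = RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
			      RTE_ETH_TX_OFFLOAD_UDP_CKSUM;

	if (offloads & both)
		offloads |= both; /* enable the pair together */
	return offloads;
}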
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
return status;
/* Link UP */
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return 0;
}
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
struct pmd_internals *p = dev->data->dev_private;
/* Link DOWN */
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
/* Firmware */
softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
/* dev->data */
dev->data->dev_private = dev_private;
- dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
- dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
- dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
dev->data->mac_addrs = &eth_addr;
dev->data->promiscuous = 1;
dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 7416a6b1b816..255444a4181d 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
eth_dev_configure(struct rte_eth_dev *dev)
{
struct rte_eth_dev_data *data = dev->data;
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+ if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
dev->rx_pkt_burst = eth_szedata2_rx_scattered;
data->scattered_rx = 1;
} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_rx_queues = internals->max_rx_queues;
dev_info->max_tx_queues = internals->max_tx_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
dev_info->tx_offload_capa = 0;
dev_info->rx_queue_offload_capa = 0;
dev_info->tx_queue_offload_capa = 0;
- dev_info->speed_capa = ETH_LINK_SPEED_100G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
return 0;
}
@@ -1204,10 +1204,10 @@ eth_link_update(struct rte_eth_dev *dev,
memset(&link, 0, sizeof(link));
- link.link_speed = ETH_SPEED_NUM_100G;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_status = ETH_LINK_UP;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..ad5980ef5280 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
#define TAP_IOV_DEFAULT_MAX 1024
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS | \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO)
static int tap_devices_count;
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
static volatile uint32_t tap_trigger; /* Rx trigger */
static struct rte_eth_link pmd_link = {
- .link_speed = ETH_SPEED_NUM_10G,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN,
- .link_autoneg = ETH_LINK_FIXED,
+ .link_speed = RTE_ETH_SPEED_NUM_10G,
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN,
+ .link_autoneg = RTE_ETH_LINK_FIXED,
};
static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
len = readv(process_private->rxq_fds[rxq->queue_id],
*rxq->iovecs,
- 1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+ 1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
rxq->nb_rx_desc : 1));
if (len < (int)sizeof(struct tun_pi))
break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
seg->next = NULL;
mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
RTE_PTYPE_ALL_MASK);
- if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
tap_verify_csum(mbuf);
/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
}
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_flags = IFF_UP };
- dev->data->dev_link.link_status = ETH_LINK_UP;
+ dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
}
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
uint32_t speed = pmd_link.link_speed;
uint32_t capa = 0;
- if (speed >= ETH_SPEED_NUM_10M)
- capa |= ETH_LINK_SPEED_10M;
- if (speed >= ETH_SPEED_NUM_100M)
- capa |= ETH_LINK_SPEED_100M;
- if (speed >= ETH_SPEED_NUM_1G)
- capa |= ETH_LINK_SPEED_1G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_2_5G;
- if (speed >= ETH_SPEED_NUM_5G)
- capa |= ETH_LINK_SPEED_5G;
- if (speed >= ETH_SPEED_NUM_10G)
- capa |= ETH_LINK_SPEED_10G;
- if (speed >= ETH_SPEED_NUM_20G)
- capa |= ETH_LINK_SPEED_20G;
- if (speed >= ETH_SPEED_NUM_25G)
- capa |= ETH_LINK_SPEED_25G;
- if (speed >= ETH_SPEED_NUM_40G)
- capa |= ETH_LINK_SPEED_40G;
- if (speed >= ETH_SPEED_NUM_50G)
- capa |= ETH_LINK_SPEED_50G;
- if (speed >= ETH_SPEED_NUM_56G)
- capa |= ETH_LINK_SPEED_56G;
- if (speed >= ETH_SPEED_NUM_100G)
- capa |= ETH_LINK_SPEED_100G;
+ if (speed >= RTE_ETH_SPEED_NUM_10M)
+ capa |= RTE_ETH_LINK_SPEED_10M;
+ if (speed >= RTE_ETH_SPEED_NUM_100M)
+ capa |= RTE_ETH_LINK_SPEED_100M;
+ if (speed >= RTE_ETH_SPEED_NUM_1G)
+ capa |= RTE_ETH_LINK_SPEED_1G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_2_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_5G)
+ capa |= RTE_ETH_LINK_SPEED_5G;
+ if (speed >= RTE_ETH_SPEED_NUM_10G)
+ capa |= RTE_ETH_LINK_SPEED_10G;
+ if (speed >= RTE_ETH_SPEED_NUM_20G)
+ capa |= RTE_ETH_LINK_SPEED_20G;
+ if (speed >= RTE_ETH_SPEED_NUM_25G)
+ capa |= RTE_ETH_LINK_SPEED_25G;
+ if (speed >= RTE_ETH_SPEED_NUM_40G)
+ capa |= RTE_ETH_LINK_SPEED_40G;
+ if (speed >= RTE_ETH_SPEED_NUM_50G)
+ capa |= RTE_ETH_LINK_SPEED_50G;
+ if (speed >= RTE_ETH_SPEED_NUM_56G)
+ capa |= RTE_ETH_LINK_SPEED_56G;
+ if (speed >= RTE_ETH_SPEED_NUM_100G)
+ capa |= RTE_ETH_LINK_SPEED_100G;
return capa;
}
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
if (!(ifr.ifr_flags & IFF_UP) ||
!(ifr.ifr_flags & IFF_RUNNING)) {
- dev_link->link_status = ETH_LINK_DOWN;
+ dev_link->link_status = RTE_ETH_LINK_DOWN;
return 0;
}
}
tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
dev_link->link_status =
((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
- ETH_LINK_UP :
- ETH_LINK_DOWN);
+ RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN);
return 0;
}
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
int ret;
/* initialize GSO context */
- gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+ gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
if (!pmd->gso_ctx_mp) {
/*
* Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->csum = !!(offloads &
- (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM));
+ (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
if (ret == -1)
@@ -1765,7 +1765,7 @@ static int
tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -1773,7 +1773,7 @@ static int
tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_fc_conf *fc_conf)
{
- if (fc_conf->mode != RTE_FC_NONE)
+ if (fc_conf->mode != RTE_ETH_FC_NONE)
return -ENOTSUP;
return 0;
}
@@ -2267,7 +2267,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
}
}
}
- pmd_link.link_speed = ETH_SPEED_NUM_10G;
+ pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
@@ -2441,7 +2441,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
return 0;
}
- speed = ETH_SPEED_NUM_10G;
+ speed = RTE_ETH_SPEED_NUM_10G;
/* use tap%d which causes kernel to choose next available */
strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
#define TAP_RSS_HASH_KEY_SIZE 40
/* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
/* hashed fields for RSS */
enum hash_field {
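The check implied by TAP_RSS_HF_MASK can be sketched as below (hypothetical helper, assumes the tap_rss.h definitions above): any requested hash function outside IP/UDP/TCP is rejected:

static int
tap_rss_hf_ok(uint64_t rss_hf)
{
	return (rss_hf & TAP_RSS_HF_MASK) == 0;
}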
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..26861e4103d9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
{
memset(link, 0, sizeof(*link));
- link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+ link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
if (nic->duplex == NICVF_HALF_DUPLEX)
- link->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
else if (nic->duplex == NICVF_FULL_DUPLEX)
- link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
link->link_speed = nic->speed;
- link->link_autoneg = ETH_LINK_AUTONEG;
+ link->link_autoneg = RTE_ETH_LINK_AUTONEG;
}
static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
/* rte_eth_link_get() might need to wait up to 9 seconds */
for (i = 0; i < MAX_CHECK_TIME; i++) {
nicvf_link_status_update(nic, &link);
- if (link.link_status == ETH_LINK_UP)
+ if (link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
}
@@ -177,9 +177,9 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
if (frame_size > NIC_HW_L2_MAX_LEN)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
@@ -404,35 +404,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
{
uint64_t nic_rss = 0;
- if (ethdev_rss & ETH_RSS_IPV4)
+ if (ethdev_rss & RTE_ETH_RSS_IPV4)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_IPV6)
+ if (ethdev_rss & RTE_ETH_RSS_IPV6)
nic_rss |= RSS_IP_ENA;
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
- if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
- if (ethdev_rss & ETH_RSS_PORT)
+ if (ethdev_rss & RTE_ETH_RSS_PORT)
nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
- if (ethdev_rss & ETH_RSS_VXLAN)
+ if (ethdev_rss & RTE_ETH_RSS_VXLAN)
nic_rss |= RSS_TUN_VXLAN_ENA;
- if (ethdev_rss & ETH_RSS_GENEVE)
+ if (ethdev_rss & RTE_ETH_RSS_GENEVE)
nic_rss |= RSS_TUN_GENEVE_ENA;
- if (ethdev_rss & ETH_RSS_NVGRE)
+ if (ethdev_rss & RTE_ETH_RSS_NVGRE)
nic_rss |= RSS_TUN_NVGRE_ENA;
}
@@ -445,28 +445,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic, uint64_t nic_rss)
uint64_t ethdev_rss = 0;
if (nic_rss & RSS_IP_ENA)
- ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+ ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
- ETH_RSS_NONFRAG_IPV6_TCP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP);
if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
- ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
- ETH_RSS_NONFRAG_IPV6_UDP);
+ ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP);
if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
- ethdev_rss |= ETH_RSS_PORT;
+ ethdev_rss |= RTE_ETH_RSS_PORT;
if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
if (nic_rss & RSS_TUN_VXLAN_ENA)
- ethdev_rss |= ETH_RSS_VXLAN;
+ ethdev_rss |= RTE_ETH_RSS_VXLAN;
if (nic_rss & RSS_TUN_GENEVE_ENA)
- ethdev_rss |= ETH_RSS_GENEVE;
+ ethdev_rss |= RTE_ETH_RSS_GENEVE;
if (nic_rss & RSS_TUN_NVGRE_ENA)
- ethdev_rss |= ETH_RSS_NVGRE;
+ ethdev_rss |= RTE_ETH_RSS_NVGRE;
}
return ethdev_rss;
}
@@ -821,9 +821,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
dev->data->nb_rx_queues,
dev->data->dev_conf.lpbk_mode, rsshf);
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
ret = nicvf_rss_term(nic);
- else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
if (ret)
PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -884,7 +884,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_tx_queues; i++) {
txq = dev->data->tx_queues[i];
- if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
multiseg = true;
break;
}
@@ -1007,7 +1007,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
txq->offloads = offloads;
- is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+ is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
/* Choose optimum free threshold value for multipool case */
if (!is_single_pool) {
@@ -1397,11 +1397,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
PMD_INIT_FUNC_TRACE();
/* Autonegotiation may be disabled */
- dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
- dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
- ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+ RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
- dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1430,10 +1430,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
- .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM,
+ .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
};
return 0;
@@ -1597,8 +1597,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
/* Configure VLAN Strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
ret = nicvf_vlan_offload_config(dev, mask);
/* Based on the packet type (IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1727,11 +1727,11 @@ nicvf_dev_start(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
/* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
+ mtu = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ?
dev->data->dev_conf.rxmode.max_rx_pkt_len
- RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
@@ -1914,8 +1914,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
- rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (!rte_eal_has_hugepages()) {
PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1927,8 +1927,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
- rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
return -EINVAL;
}
@@ -1938,7 +1938,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
return -EINVAL;
}
@@ -1973,7 +1973,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
}
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
nic->offload_cksum = 1;
PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2050,8 +2050,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct nicvf *nic = nicvf_pmd_priv(dev);
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
nicvf_vlan_hw_strip(nic, true);
else
nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..c1876bb9e1b7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,33 +16,33 @@
#define NICVF_UNKNOWN_DUPLEX 0xff
#define NICVF_RSS_OFFLOAD_PASS1 ( \
- ETH_RSS_PORT | \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define NICVF_RSS_OFFLOAD_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
#define NICVF_TX_OFFLOAD_CAPA ( \
- DEV_TX_OFFLOAD_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define NICVF_RX_OFFLOAD_CAPA ( \
- DEV_RX_OFFLOAD_CHECKSUM | \
- DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define NICVF_DEFAULT_RX_FREE_THRESH 224
#define NICVF_DEFAULT_TX_FREE_THRESH 224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..a42b7bfe55ae 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -997,7 +997,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
restart = (rxcfg & TXGBE_RXCFG_ENA) &&
!(rxcfg & TXGBE_RXCFG_VLAN);
rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1032,7 +1032,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
switch (vlan_type) {
- case ETH_VLAN_TYPE_INNER:
+ case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
wr32m(hw, TXGBE_VLANCTL,
TXGBE_VLANCTL_TPID_MASK,
@@ -1052,7 +1052,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
TXGBE_TAGTPID_LSB(tpid));
}
break;
- case ETH_VLAN_TYPE_OUTER:
+ case RTE_ETH_VLAN_TYPE_OUTER:
if (vlan_ext) {
/* Only the high 16-bits is valid */
wr32m(hw, TXGBE_EXTAG,
@@ -1137,10 +1137,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
if (on) {
rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
} else {
rxq->vlan_flags = PKT_RX_VLAN;
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
@@ -1239,7 +1239,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
txgbe_vlan_strip_queue_set(dev, i, 1);
else
txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1253,17 +1253,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
struct txgbe_rx_queue *rxq;
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
rxmode = &dev->data->dev_conf.rxmode;
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
else
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
}
}
@@ -1274,25 +1274,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
struct rte_eth_rxmode *rxmode;
rxmode = &dev->data->dev_conf.rxmode;
- if (mask & ETH_VLAN_STRIP_MASK)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
txgbe_vlan_hw_strip_config(dev);
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
txgbe_vlan_hw_filter_enable(dev);
else
txgbe_vlan_hw_filter_disable(dev);
}
- if (mask & ETH_VLAN_EXTEND_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
txgbe_vlan_hw_extend_enable(dev);
else
txgbe_vlan_hw_extend_disable(dev);
}
- if (mask & ETH_QINQ_STRIP_MASK) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
txgbe_qinq_hw_strip_enable(dev);
else
txgbe_qinq_hw_strip_disable(dev);
@@ -1330,10 +1330,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
switch (nb_rx_q) {
case 1:
case 2:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
break;
case 4:
- RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
break;
default:
return -EINVAL;
@@ -1356,18 +1356,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
break;
- case ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
/* DCB/RSS VMDQ in SRIOV mode, not implemented yet */
PMD_INIT_LOG(ERR, "SRIOV active,"
" unsupported mq_mode rx %d.",
dev_conf->rxmode.mq_mode);
return -EINVAL;
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1377,13 +1377,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
break;
- case ETH_MQ_RX_VMDQ_ONLY:
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_NONE:
/* if no mq mode is configured, use the default scheme */
dev->data->dev_conf.rxmode.mq_mode =
- ETH_MQ_RX_VMDQ_ONLY;
+ RTE_ETH_MQ_RX_VMDQ_ONLY;
break;
- default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+ default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
/* SRIOV only works in VMDq enable mode */
PMD_INIT_LOG(ERR, "SRIOV is active,"
" wrong mq_mode rx %d.",
@@ -1392,13 +1392,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
+ PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+ dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
break;
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+ default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
dev->data->dev_conf.txmode.mq_mode =
- ETH_MQ_TX_VMDQ_ONLY;
+ RTE_ETH_MQ_TX_VMDQ_ONLY;
break;
}
@@ -1413,13 +1413,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
} else {
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
" not supported.");
return -EINVAL;
}
/* check configuration for vmdq+dcb mode */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1428,15 +1428,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools must be %d or %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_tx_conf *conf;
if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1445,39 +1445,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
return -EINVAL;
}
conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
- if (!(conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
+ if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+ conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
" nb_queue_pools != %d and"
" nb_queue_pools != %d.",
- ETH_16_POOLS, ETH_32_POOLS);
+ RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
return -EINVAL;
}
}
/* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
- if (!(conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
+ if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+ conf->nb_tcs == RTE_ETH_8_TCS)) {
PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
" and nb_tcs != %d.",
- ETH_4_TCS, ETH_8_TCS);
+ RTE_ETH_4_TCS, RTE_ETH_8_TCS);
return -EINVAL;
}
}
@@ -1494,8 +1494,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/* multiple queue mode checking */
ret = txgbe_check_mq_mode(dev);
@@ -1637,7 +1637,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
* - half duplex (checked afterwards for valid speeds)
* - fixed speed: TODO implement
*/
- if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+ if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR,
"Invalid link_speeds for port %u, fix speed not supported",
dev->data->port_id);
@@ -1704,15 +1704,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbe_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
goto error;
}
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
/* Enable vlan filtering for VMDq */
txgbe_vmdq_vlan_hw_filter_enable(dev);
}
@@ -1773,8 +1773,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
if (err)
goto error;
- allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
- ETH_LINK_SPEED_10G;
+ allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+ RTE_ETH_LINK_SPEED_10G;
link_speeds = &dev->data->dev_conf.link_speeds;
if (*link_speeds & ~allowed_speeds) {
@@ -1783,20 +1783,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
}
speed = 0x0;
- if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
speed = (TXGBE_LINK_SPEED_100M_FULL |
TXGBE_LINK_SPEED_1GB_FULL |
TXGBE_LINK_SPEED_10GB_FULL);
} else {
- if (*link_speeds & ETH_LINK_SPEED_10G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
speed |= TXGBE_LINK_SPEED_10GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
speed |= TXGBE_LINK_SPEED_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_2_5G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_1G)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
speed |= TXGBE_LINK_SPEED_1GB_FULL;
- if (*link_speeds & ETH_LINK_SPEED_100M)
+ if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
speed |= TXGBE_LINK_SPEED_100M_FULL;
}
@@ -2611,7 +2611,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2644,11 +2644,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_desc_lim = tx_desc_lim;
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
- dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
- dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.burst_size = 32;
@@ -2705,10 +2705,10 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait = 1;
memset(&link, 0, sizeof(link));
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
- link.link_duplex = ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
hw->mac.get_link_status = true;
@@ -2722,8 +2722,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
if (err != 0) {
- link.link_speed = ETH_SPEED_NUM_100M;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -2742,34 +2742,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
}
intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
switch (link_speed) {
default:
case TXGBE_LINK_SPEED_UNKNOWN:
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_100M_FULL:
- link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
break;
case TXGBE_LINK_SPEED_1GB_FULL:
- link.link_speed = ETH_SPEED_NUM_1G;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
break;
case TXGBE_LINK_SPEED_2_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_2_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
break;
case TXGBE_LINK_SPEED_5GB_FULL:
- link.link_speed = ETH_SPEED_NUM_5G;
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
break;
case TXGBE_LINK_SPEED_10GB_FULL:
- link.link_speed = ETH_SPEED_NUM_10G;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
break;
}
@@ -2994,7 +2994,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
(int)(dev->data->port_id),
(unsigned int)link.link_speed,
- link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
"full-duplex" : "half-duplex");
} else {
PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3225,13 +3225,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
tx_pause = 0;
if (rx_pause && tx_pause)
- fc_conf->mode = RTE_FC_FULL;
+ fc_conf->mode = RTE_ETH_FC_FULL;
else if (rx_pause)
- fc_conf->mode = RTE_FC_RX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
else if (tx_pause)
- fc_conf->mode = RTE_FC_TX_PAUSE;
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
else
- fc_conf->mode = RTE_FC_NONE;
+ fc_conf->mode = RTE_ETH_FC_NONE;
return 0;
}
@@ -3363,10 +3363,10 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
return -ENOTSUP;
}
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number the hardware can support "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
@@ -3404,10 +3404,10 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
"(%d) doesn't match the number the hardware can support "
- "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ "(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
return -EINVAL;
}
@@ -3593,12 +3593,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
return -ENOTSUP;
if (on) {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = ~0;
wr32(hw, TXGBE_UCADDRTBL(i), ~0);
}
} else {
- for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
uta_info->uta_shadow[i] = 0;
wr32(hw, TXGBE_UCADDRTBL(i), 0);
}
@@ -3622,15 +3622,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
uint32_t new_val = orig_val;
- if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
new_val |= TXGBE_POOLETHCTL_UTA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
new_val |= TXGBE_POOLETHCTL_MCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
new_val |= TXGBE_POOLETHCTL_UCHA;
- if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
new_val |= TXGBE_POOLETHCTL_BCA;
- if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+ if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
new_val |= TXGBE_POOLETHCTL_MCP;
return new_val;
@@ -4281,15 +4281,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
rte_eth_linkstatus_get(dev, &link);
switch (link.link_speed) {
- case ETH_SPEED_NUM_100M:
+ case RTE_ETH_SPEED_NUM_100M:
incval = TXGBE_INCVAL_100;
shift = TXGBE_INCVAL_SHIFT_100;
break;
- case ETH_SPEED_NUM_1G:
+ case RTE_ETH_SPEED_NUM_1G:
incval = TXGBE_INCVAL_1GB;
shift = TXGBE_INCVAL_SHIFT_1GB;
break;
- case ETH_SPEED_NUM_10G:
+ case RTE_ETH_SPEED_NUM_10G:
default:
incval = TXGBE_INCVAL_10GB;
shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4645,7 +4645,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
uint8_t nb_tcs;
uint8_t i, j;
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
else
dcb_info->nb_tcs = 1;
@@ -4656,7 +4656,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
if (dcb_config->vt_mode) { /* vt is enabled */
struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
for (j = 0; j < nb_tcs; j++) {
@@ -4680,9 +4680,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
} else { /* vt is disabled */
struct rte_eth_dcb_rx_conf *rx_conf =
&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
- if (dcb_info->nb_tcs == ETH_4_TCS) {
+ if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4695,7 +4695,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
- } else if (dcb_info->nb_tcs == ETH_8_TCS) {
+ } else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
for (i = 0; i < dcb_info->nb_tcs; i++) {
dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4925,7 +4925,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
}
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
break;
default:
@@ -4956,7 +4956,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
return ret;
switch (l2_tunnel->l2_tunnel_type) {
- case RTE_L2_TUNNEL_TYPE_E_TAG:
+ case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
break;
default:
@@ -4996,7 +4996,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5004,7 +5004,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
ret = -EINVAL;
@@ -5012,7 +5012,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
ret = -EINVAL;
@@ -5020,7 +5020,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
if (udp_tunnel->udp_port == 0) {
PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
ret = -EINVAL;
@@ -5052,7 +5052,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
return -EINVAL;
switch (udp_tunnel->prot_type) {
- case RTE_TUNNEL_TYPE_VXLAN:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5062,7 +5062,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_VXLANPORT, 0);
break;
- case RTE_TUNNEL_TYPE_GENEVE:
+ case RTE_ETH_TUNNEL_TYPE_GENEVE:
cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5072,7 +5072,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_GENEVEPORT, 0);
break;
- case RTE_TUNNEL_TYPE_TEREDO:
+ case RTE_ETH_TUNNEL_TYPE_TEREDO:
cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5082,7 +5082,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
wr32(hw, TXGBE_TEREDOPORT, 0);
break;
- case RTE_TUNNEL_TYPE_VXLAN_GPE:
+ case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
if (cur_port != udp_tunnel->udp_port) {
PMD_DRV_LOG(ERR, "Port %u does not exist.",
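
The txgbe_dev_start() hunk above maps the generic link_speeds bitmap onto device speed bits; condensed into one helper it looks roughly like this (a sketch reusing the driver's TXGBE_LINK_SPEED_* macros, not code from the patch):

#include <rte_ethdev.h>

/* RTE_ETH_LINK_SPEED_AUTONEG is 0, i.e. an empty bitmap
 * selects every speed the port supports. */
static uint32_t
txgbe_speeds_to_hw(uint32_t link_speeds)
{
	uint32_t speed = 0;

	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
		return TXGBE_LINK_SPEED_100M_FULL |
		       TXGBE_LINK_SPEED_1GB_FULL |
		       TXGBE_LINK_SPEED_10GB_FULL;
	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
		speed |= TXGBE_LINK_SPEED_10GB_FULL;
	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
		speed |= TXGBE_LINK_SPEED_5GB_FULL;
	if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
		speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
		speed |= TXGBE_LINK_SPEED_1GB_FULL;
	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
		speed |= TXGBE_LINK_SPEED_100M_FULL;
	return speed;
}
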
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..75a9e2580e27 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -56,15 +56,15 @@
#define TXGBE_5TUPLE_MIN_PRI 1
#define TXGBE_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
#define TXGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define TXGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
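
TXGBE_RSS_OFFLOAD_ALL above is what the device reports as flow_type_rss_offloads. For reference, a minimal sketch of how an application would request a subset of these hash fields under the new names (illustrative only):

#include <rte_ethdev.h>

static void
request_rss(struct rte_eth_conf *conf)
{
	conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf->rx_adv_conf.rss_conf.rss_key = NULL;	/* keep the default key */
	/* Must remain a subset of dev_info.flow_type_rss_offloads,
	 * i.e. of TXGBE_RSS_OFFLOAD_ALL on this device. */
	conf->rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP |
		RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
}
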
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 18ed94bd277b..05773cb20786 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -491,14 +491,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
- dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
- dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -579,22 +579,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
dev->data->port_id);
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
/*
* VF has no ability to enable/disable HW CRC
* Keep the persistent behavior the same as Host PF
*/
#ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
- conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#else
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+ if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+ conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
}
#endif
@@ -652,8 +652,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
txgbevf_set_vfta_all(dev, 1);
/* Set HW strip */
- mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
- ETH_VLAN_EXTEND_MASK;
+ mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+ RTE_ETH_VLAN_EXTEND_MASK;
err = txgbevf_vlan_offload_config(dev, mask);
if (err) {
PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -896,10 +896,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
int on = 0;
/* VF function only support hw strip feature, others are not support */
- if (mask & ETH_VLAN_STRIP_MASK) {
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
- on = !!(rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
txgbevf_vlan_strip_queue_set(dev, i, on);
}
}
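
The VF path above only honours RTE_ETH_VLAN_STRIP_MASK; the mask bits select which VLAN settings to (re)apply, while the desired state is carried in the offload flags. A sketch of the matching caller side, using ethdev's get/set VLAN offload helpers (the wrapper itself is hypothetical):

#include <stdbool.h>
#include <rte_ethdev.h>

static int
toggle_vlan_strip(uint16_t port_id, bool on)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;
	if (on)
		mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	else
		mask &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
	/* The PMD applies the change per queue, as in the VF code above. */
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}
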
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
* flexbytes matching field, and drop queue (only for perfect matching mode).
*/
static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
uint32_t *fdirctrl, uint32_t *flex)
{
*fdirctrl = 0;
*flex = 0;
switch (conf->pballoc) {
- case RTE_FDIR_PBALLOC_64K:
+ case RTE_ETH_FDIR_PBALLOC_64K:
/* 8k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
break;
- case RTE_FDIR_PBALLOC_128K:
+ case RTE_ETH_FDIR_PBALLOC_128K:
/* 16k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
break;
- case RTE_FDIR_PBALLOC_256K:
+ case RTE_ETH_FDIR_PBALLOC_256K:
/* 32k - 1 signature filters */
*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
static uint32_t
atr_compute_perfect_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
*/
static uint32_t
atr_compute_signature_hash(struct txgbe_atr_input *input,
- enum rte_fdir_pballoc_type pballoc)
+ enum rte_eth_fdir_pballoc_type pballoc)
{
uint32_t bucket_hash, sig_hash;
bucket_hash = txgbe_atr_compute_hash(input,
TXGBE_ATR_BUCKET_HASH_KEY);
- if (pballoc == RTE_FDIR_PBALLOC_256K)
+ if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
- else if (pballoc == RTE_FDIR_PBALLOC_128K)
+ else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
else
bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
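
For context on the pballoc values renamed above: the flow-director packet-buffer allocation fixes the filter capacity noted in the comments of configure_fdir_flags(). A hypothetical helper restating that mapping:

#include <rte_ethdev.h>

static uint32_t
fdir_sig_filter_capacity(enum rte_eth_fdir_pballoc_type pballoc)
{
	switch (pballoc) {
	case RTE_ETH_FDIR_PBALLOC_64K:
		return 8 * 1024 - 1;	/* 8k - 1 signature filters */
	case RTE_ETH_FDIR_PBALLOC_128K:
		return 16 * 1024 - 1;	/* 16k - 1 */
	case RTE_ETH_FDIR_PBALLOC_256K:
		return 32 * 1024 - 1;	/* 32k - 1 */
	default:
		return 0;
	}
}
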
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
return -rte_errno;
}
- filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+ filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
/**
* grp and e_cid_base are bit fields and only use 14 bits.
* e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
aead_xform = &conf->crypto_xform->aead;
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
return -ENOTSUP;
}
} else {
- if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
tx_offloads = dev->data->dev_conf.txmode.offloads;
/* sanity checks */
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
return -1;
}
- if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
return -1;
}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
reg |= TXGBE_SECRXCTL_CRCSTRIP;
wr32(hw, TXGBE_SECRXCTL, reg);
- if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
return -1;
}
}
- if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
reg = rd32(hw, TXGBE_SECTXCTL);
if (reg != TXGBE_SECTXCTL_STFWD) {
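
The sanity checks in txgbe_crypto_enable_ipsec() above imply an ordering constraint for applications: inline IPsec on this device excludes LRO and requires CRC stripping. A sketch of preparing rte_eth_conf accordingly (assumed usage, not from the patch):

#include <rte_ethdev.h>

static void
prepare_inline_ipsec(struct rte_eth_conf *conf)
{
	/* Clear the offloads the driver rejects alongside SECURITY. */
	conf->rxmode.offloads &= ~(RTE_ETH_RX_OFFLOAD_TCP_LRO |
				   RTE_ETH_RX_OFFLOAD_KEEP_CRC);
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
	conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
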
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 494d779a3c9d..44f6f103edd2 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -103,15 +103,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
memset(uta_info, 0, sizeof(struct txgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
} else {
nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
}
RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -258,13 +258,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
break;
}
@@ -613,29 +613,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of number of DCB traffic classes */
eth_conf = &eth_dev->data->dev_conf;
switch (eth_conf->txmode.mq_mode) {
- case ETH_MQ_TX_NONE:
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_NONE:
+ case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
", but its tx mode = %d\n", vf,
eth_conf->txmode.mq_mode);
return -1;
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
switch (vmdq_dcb_tx_conf->nb_queue_pools) {
- case ETH_16_POOLS:
- num_tcs = ETH_8_TCS;
+ case RTE_ETH_16_POOLS:
+ num_tcs = RTE_ETH_8_TCS;
break;
- case ETH_32_POOLS:
- num_tcs = ETH_4_TCS;
+ case RTE_ETH_32_POOLS:
+ num_tcs = RTE_ETH_4_TCS;
break;
default:
return -1;
}
break;
- /* ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
- case ETH_MQ_TX_VMDQ_ONLY:
+ /* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+ case RTE_ETH_MQ_TX_VMDQ_ONLY:
hw = TXGBE_DEV_HW(eth_dev);
vmvir = rd32(hw, TXGBE_POOLTAG(vf));
vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
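
txgbe_pf_host_init() above fixes the pool layout from the VF count; restated as a hypothetical helper for readability (the thresholds reuse the renamed pool-count enums, which are plain integers):

#include <rte_ethdev.h>

static uint16_t
queues_per_pool(uint16_t vf_num)
{
	if (vf_num >= RTE_ETH_32_POOLS)	/* 32+ VFs: 64 pools of 2 queues */
		return 2;
	if (vf_num >= RTE_ETH_16_POOLS)	/* 16..31 VFs: 32 pools of 4 queues */
		return 4;
	return 8;			/* <16 VFs: 16 pools of 8 queues */
}
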
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c302d49af728 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1939,7 +1939,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
uint64_t
txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
{
- return DEV_RX_OFFLOAD_VLAN_STRIP;
+ return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
uint64_t
@@ -1949,35 +1949,35 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
- offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_RSS_HASH |
- DEV_RX_OFFLOAD_SCATTER;
+ offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
if (!txgbe_is_vf(dev))
- offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_QINQ_STRIP |
- DEV_RX_OFFLOAD_VLAN_EXTEND);
+ offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
/*
* RSC is only supported by PF devices in a non-SR-IOV
* mode.
*/
if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+ offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
if (hw->mac.type == txgbe_mac_raptor)
- offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+ offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
- offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+ offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- offloads |= DEV_RX_OFFLOAD_SECURITY;
+ offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
#endif
return offloads;
@@ -2202,32 +2202,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
uint64_t tx_offload_capa;
tx_offload_capa =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO |
- DEV_TX_OFFLOAD_UDP_TSO |
- DEV_TX_OFFLOAD_UDP_TNL_TSO |
- DEV_TX_OFFLOAD_IP_TNL_TSO |
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
- DEV_TX_OFFLOAD_GRE_TNL_TSO |
- DEV_TX_OFFLOAD_IPIP_TNL_TSO |
- DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (!txgbe_is_vf(dev))
- tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
- tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
- tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
#endif
return tx_offload_capa;
}
@@ -2329,7 +2329,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->tx_deferred_start = tx_conf->tx_deferred_start;
#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
- DEV_TX_OFFLOAD_SECURITY);
+ RTE_ETH_TX_OFFLOAD_SECURITY);
#endif
/* Modification to set tail pointer for virtual function
@@ -2579,7 +2579,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -2880,20 +2880,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_VFPLCFG_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
if (rss_hf)
@@ -2910,20 +2910,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
} else {
mrqc = rd32(hw, TXGBE_RACTL);
mrqc &= ~TXGBE_RACTL_RSSMASK;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
mrqc |= TXGBE_RACTL_RSSIPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
mrqc |= TXGBE_RACTL_RSSIPV4TCP;
- if (rss_hf & ETH_RSS_IPV6 ||
- rss_hf & ETH_RSS_IPV6_EX)
+ if (rss_hf & RTE_ETH_RSS_IPV6 ||
+ rss_hf & RTE_ETH_RSS_IPV6_EX)
mrqc |= TXGBE_RACTL_RSSIPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
- rss_hf & ETH_RSS_IPV6_TCP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6TCP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
mrqc |= TXGBE_RACTL_RSSIPV4UDP;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
- rss_hf & ETH_RSS_IPV6_UDP_EX)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
mrqc |= TXGBE_RACTL_RSSIPV6UDP;
if (rss_hf)
@@ -2964,39 +2964,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
if (hw->mac.type == txgbe_mac_raptor_vf) {
mrqc = rd32(hw, TXGBE_VFPLCFG);
if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
rss_hf = 0;
} else {
mrqc = rd32(hw, TXGBE_RACTL);
if (mrqc & TXGBE_RACTL_RSSIPV4)
- rss_hf |= ETH_RSS_IPV4;
+ rss_hf |= RTE_ETH_RSS_IPV4;
if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
if (mrqc & TXGBE_RACTL_RSSIPV6)
- rss_hf |= ETH_RSS_IPV6 |
- ETH_RSS_IPV6_EX;
+ rss_hf |= RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_IPV6_EX;
if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
- ETH_RSS_IPV6_TCP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+ RTE_ETH_RSS_IPV6_TCP_EX;
if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
- rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
- ETH_RSS_IPV6_UDP_EX;
+ rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_IPV6_UDP_EX;
if (!(mrqc & TXGBE_RACTL_RSSENA))
rss_hf = 0;
}
@@ -3026,7 +3026,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
*/
if (adapter->rss_reta_updated == 0) {
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == dev->data->nb_rx_queues)
j = 0;
reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3063,12 +3063,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
num_pools = cfg->nb_queue_pools;
/* Check we have a valid number of pools */
- if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+ if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
txgbe_rss_disable(dev);
return;
}
/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
- nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+ nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
/*
* split rx buffer up into sections, each for 1 traffic class
@@ -3083,7 +3083,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
/* zero alloc all unused TCs */
- for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
rxpbsize &= (~(0x3FF << 10));
@@ -3091,7 +3091,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
}
- if (num_pools == ETH_16_POOLS) {
+ if (num_pools == RTE_ETH_16_POOLS) {
mrqc = TXGBE_PORTCTL_NUMTC_8;
mrqc |= TXGBE_PORTCTL_NUMVT_16;
} else {
@@ -3110,7 +3110,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_POOLCTL, vt_ctl);
queue_mapping = 0;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
/*
* mapping is done with 3 bits per priority,
* so shift by i*3 each time
@@ -3131,7 +3131,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
wr32(hw, TXGBE_POOLRXENA(0),
- num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
wr32(hw, TXGBE_ETHADDRIDX, 0);
wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3201,7 +3201,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
/*PF VF Transmit Enable*/
wr32(hw, TXGBE_POOLTXENA(0),
vmdq_tx_conf->nb_queue_pools ==
- ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+ RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
/*Configure general DCB TX parameters*/
txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3217,12 +3217,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3232,7 +3232,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3250,12 +3250,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
uint8_t i, j;
/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
- if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
- dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+ if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
} else {
- dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
- dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+ dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+ dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
}
/* Initialize User Priority to Traffic Class mapping */
@@ -3265,7 +3265,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = vmdq_tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3292,7 +3292,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = rx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3319,7 +3319,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
}
/* User Priority to Traffic Class mapping */
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
j = tx_conf->dcb_tc[i];
tc = &dcb_config->tc_config[j];
tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3455,7 +3455,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_rx = DCB_RX_CONFIG;
/*
@@ -3466,8 +3466,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
/*Configure general VMDQ and DCB RX parameters*/
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_DCB:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_DCB_RSS:
dcb_config->vt_mode = false;
config_dcb_rx = DCB_RX_CONFIG;
/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3480,7 +3480,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
break;
}
switch (dev->data->dev_conf.txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
+ case RTE_ETH_MQ_TX_VMDQ_DCB:
dcb_config->vt_mode = true;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB and VT TX configuration parameters
@@ -3491,7 +3491,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
break;
- case ETH_MQ_TX_DCB:
+ case RTE_ETH_MQ_TX_DCB:
dcb_config->vt_mode = false;
config_dcb_tx = DCB_TX_CONFIG;
/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3507,15 +3507,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
nb_tcs = dcb_config->num_tcs.pfc_tcs;
/* Unpack map */
txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
- if (nb_tcs == ETH_4_TCS) {
+ if (nb_tcs == RTE_ETH_4_TCS) {
/* Avoid un-configured priority mapping to TC0 */
uint8_t j = 4;
uint8_t mask = 0xFF;
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
mask = (uint8_t)(mask & (~(1 << map[i])));
for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
- if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+ if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
map[j++] = i;
mask >>= 1;
}
@@ -3556,7 +3556,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
/* zero alloc all unused TCs */
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
wr32(hw, TXGBE_PBRXSIZE(i), 0);
}
if (config_dcb_tx) {
@@ -3572,7 +3572,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
}
/* Clear unused TCs, if any, to zero buffer size*/
- for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
wr32(hw, TXGBE_PBTXSIZE(i), 0);
wr32(hw, TXGBE_PBTXDMATH(i), 0);
}
@@ -3614,7 +3614,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
/* Check if the PFC is supported */
- if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+ if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
for (i = 0; i < nb_tcs; i++) {
/* If the TC count is 8,
@@ -3628,7 +3628,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
tc->pfc = txgbe_dcb_pfc_enabled;
}
txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
- if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+ if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
pfc_en &= 0x0F;
ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
}
@@ -3699,12 +3699,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* check support mq_mode for DCB */
- if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
- dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+ if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+ dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
return;
- if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+ if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
return;
/** Configure DCB hardware **/
@@ -3760,7 +3760,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
/* pool enabling for receive - 64 */
wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
- if (num_pools == ETH_64_POOLS)
+ if (num_pools == RTE_ETH_64_POOLS)
wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
/*
@@ -3884,11 +3884,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
@@ -3911,15 +3911,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
mrqc = rd32(hw, TXGBE_PORTCTL);
mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
switch (RTE_ETH_DEV_SRIOV(dev).active) {
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mrqc |= TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -3942,21 +3942,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* any DCB/RSS w/o VMDq multi-queue setting
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_DCB_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_rss_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
txgbe_vmdq_dcb_configure(dev);
break;
- case ETH_MQ_RX_VMDQ_ONLY:
+ case RTE_ETH_MQ_RX_VMDQ_ONLY:
txgbe_vmdq_rx_hw_configure(dev);
break;
- case ETH_MQ_RX_NONE:
+ case RTE_ETH_MQ_RX_NONE:
default:
/* if mq_mode is none, disable rss mode.*/
txgbe_rss_disable(dev);
@@ -3967,18 +3967,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
* Support RSS together with SRIOV.
*/
switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- case ETH_MQ_RX_VMDQ_RSS:
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
txgbe_config_vf_rss(dev);
break;
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_DCB:
+ case RTE_ETH_MQ_RX_VMDQ_DCB:
+ case RTE_ETH_MQ_RX_DCB:
/* In SRIOV, the configuration is the same as VMDq case */
txgbe_vmdq_dcb_configure(dev);
break;
/* DCB/RSS together with SRIOV is not supported */
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- case ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB/RSS with VMDq & SRIOV");
return -1;
@@ -4008,7 +4008,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV inactive scheme
* any DCB w/o VMDq multi-queue setting
*/
- if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+ if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
txgbe_vmdq_tx_hw_configure(hw);
else
wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4018,13 +4018,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
* SRIOV active scheme
* FIXME if support DCB together with VMDq & SRIOV
*/
- case ETH_64_POOLS:
+ case RTE_ETH_64_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_64;
break;
- case ETH_32_POOLS:
+ case RTE_ETH_32_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_32;
break;
- case ETH_16_POOLS:
+ case RTE_ETH_16_POOLS:
mtqc = TXGBE_PORTCTL_NUMVT_16;
break;
default:
@@ -4087,10 +4087,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* Sanity check */
dev->dev_ops->dev_infos_get(dev, &dev_info);
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
rsc_capable = true;
- if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
"support it");
return -EINVAL;
@@ -4098,22 +4098,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
/* RSC global configuration */
- if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
- (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+ if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
"is disabled");
return -EINVAL;
}
rfctl = rd32(hw, TXGBE_PSRCTL);
- if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
rfctl &= ~TXGBE_PSRCTL_RSCDIA;
else
rfctl |= TXGBE_PSRCTL_RSCDIA;
wr32(hw, TXGBE_PSRCTL, rfctl);
/* If LRO hasn't been requested - we are done here. */
- if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+ if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
return 0;
/* Set PSRCTL.RSCACK bit */
@@ -4253,7 +4253,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_SECURITY);
+ RTE_ETH_RX_OFFLOAD_SECURITY);
}
#endif
}
@@ -4296,7 +4296,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Configure CRC stripping, if any.
*/
hlreg0 = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4305,7 +4305,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
} else {
@@ -4329,7 +4329,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first.
*/
- rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Setup RX queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4339,7 +4339,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
* Reset crc_len in case it was changed after queue setup by a
* call to configure.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
else
rxq->crc_len = 0;
@@ -4376,11 +4376,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
- if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
/*
@@ -4395,7 +4395,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
rxcsum = rd32(hw, TXGBE_PSRCTL);
rxcsum |= TXGBE_PSRCTL_PCSD;
- if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
rxcsum |= TXGBE_PSRCTL_L4CSUM;
else
rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4404,7 +4404,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
if (hw->mac.type == txgbe_mac_raptor) {
rdrxctl = rd32(hw, TXGBE_SECRXCTL);
- if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
else
rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4527,8 +4527,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
txgbe_setup_loopback_link_raptor(hw);
#ifdef RTE_LIB_SECURITY
- if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
- (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+ if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
ret = txgbe_crypto_enable_ipsec(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR,
@@ -4836,7 +4836,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* Assume no header split and no VLAN strip support
* on any Rx queue first.
*/
- rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* Set PSR type for VF RSS according to max Rx queue */
psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4888,7 +4888,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
*/
wr32(hw, TXGBE_RXCFG(i), srrctl);
- if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
(rxmode->max_rx_pkt_len +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4897,8 +4897,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
- if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
}
/*
@@ -5069,7 +5069,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
* little-endian order.
*/
reta = 0;
- for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
if (j == conf->conf.queue_num)
j = 0;
reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
uint8_t rx_deferred_start; /**< not in global dev start. */
/** flags to set in mbuf when a vlan is detected. */
uint64_t vlan_flags;
- uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+ uint64_t offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register. */
uint8_t hthresh; /**< Host threshold register. */
uint8_t wthresh; /**< Write-back threshold reg. */
- uint64_t offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+ uint64_t offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
uint32_t ctx_curr; /**< Hardware context states. */
/** Hardware context0 history. */
struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
uint8_t nb_tcs = 0;
eth_conf = &dev->data->dev_conf;
- if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
- } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ } else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
- ETH_32_POOLS)
- nb_tcs = ETH_4_TCS;
+ RTE_ETH_32_POOLS)
+ nb_tcs = RTE_ETH_4_TCS;
else
- nb_tcs = ETH_8_TCS;
+ nb_tcs = RTE_ETH_8_TCS;
} else {
nb_tcs = 1;
}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
if (vf_num) {
/* no DCB */
if (nb_tcs == 1) {
- if (vf_num >= ETH_32_POOLS) {
+ if (vf_num >= RTE_ETH_32_POOLS) {
*nb = 2;
*base = vf_num * 2;
- } else if (vf_num >= ETH_16_POOLS) {
+ } else if (vf_num >= RTE_ETH_16_POOLS) {
*nb = 4;
*base = vf_num * 4;
} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
}
} else {
/* VT off */
- if (nb_tcs == ETH_8_TCS) {
+ if (nb_tcs == RTE_ETH_8_TCS) {
switch (tc_node_no) {
case 0:
*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9aed..778460aab5e1 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct rte_eth_link pmd_link = {
.link_speed = 10000,
- .link_duplex = ETH_LINK_FULL_DUPLEX,
- .link_status = ETH_LINK_DOWN
+ .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+ .link_status = RTE_ETH_LINK_DOWN
};
struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
- eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
rte_atomic32_set(&internal->dev_attached, 1);
update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
rte_atomic32_set(&internal->dev_attached, 0);
update_queuing_status(eth_dev);
- eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
if (vhost_driver_setup(dev) < 0)
return -1;
- internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_tx_queues = internal->max_queues;
dev_info->min_rx_bufsize = 0;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
return 0;
}
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c95a..00bbbb2b3537 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -703,7 +703,7 @@ int
virtio_dev_close(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "virtio_dev_close");
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1763,7 +1763,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
- if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+ if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
config = &local_config;
virtio_read_dev_config(hw,
@@ -1777,7 +1777,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
}
}
if (hw->duplex == DUPLEX_UNKNOWN)
- hw->duplex = ETH_LINK_FULL_DUPLEX;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
hw->speed, hw->duplex);
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1876,7 +1876,7 @@ int
eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
- uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+ uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
int vectorized = 0;
int ret;
@@ -1948,22 +1948,22 @@ static uint32_t
virtio_dev_speed_capa_get(uint32_t speed)
{
switch (speed) {
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -2079,14 +2079,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_LOG(DEBUG, "configure");
req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
- if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Rx multi queue mode %d",
rxmode->mq_mode);
return -EINVAL;
}
- if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_DRV_LOG(ERR,
"Unsupported Tx multi queue mode %d",
txmode->mq_mode);
@@ -2104,20 +2104,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM))
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
req_features |=
(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
- if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM))
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
req_features |= (1ULL << VIRTIO_NET_F_CSUM);
- if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
req_features |=
(1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2129,15 +2129,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
PMD_DRV_LOG(ERR,
"rx checksum not available on this host");
return -ENOTSUP;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
PMD_DRV_LOG(ERR,
@@ -2149,12 +2149,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
virtio_dev_cq_start(dev);
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
hw->vlan_strip = 1;
- hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+ hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
- if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(ERR,
"vlan filtering not available on this host");
@@ -2207,7 +2207,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
PMD_DRV_LOG(INFO,
"disabled packed ring vectorized rx for TCP_LRO enabled");
hw->use_vec_rx = 0;
@@ -2234,10 +2234,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
hw->use_vec_rx = 0;
}
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_TCP_LRO |
- DEV_RX_OFFLOAD_VLAN_STRIP)) {
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
PMD_DRV_LOG(INFO,
"disabled split ring vectorized rx for offloading enabled");
hw->use_vec_rx = 0;
@@ -2401,7 +2401,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
{
struct virtio_hw *hw = dev->data->dev_private;
struct rte_eth_link link;
- struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+ struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
PMD_INIT_LOG(DEBUG, "stop");
dev->data->dev_started = 0;
@@ -2440,28 +2440,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
memset(&link, 0, sizeof(link));
link.link_duplex = hw->duplex;
link.link_speed = hw->speed;
- link.link_autoneg = ETH_LINK_AUTONEG;
+ link.link_autoneg = RTE_ETH_LINK_AUTONEG;
if (!hw->started) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
PMD_INIT_LOG(DEBUG, "Get link status from hw");
virtio_read_dev_config(hw,
offsetof(struct virtio_net_config, status),
&status, sizeof(status));
if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
- link.link_status = ETH_LINK_DOWN;
- link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
PMD_INIT_LOG(DEBUG, "Port %d is down",
dev->data->port_id);
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
PMD_INIT_LOG(DEBUG, "Port %d is up",
dev->data->port_id);
}
} else {
- link.link_status = ETH_LINK_UP;
+ link.link_status = RTE_ETH_LINK_UP;
}
return rte_eth_linkstatus_set(dev, &link);
@@ -2474,8 +2474,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
struct virtio_hw *hw = dev->data->dev_private;
uint64_t offloads = rxmode->offloads;
- if (mask & ETH_VLAN_FILTER_MASK) {
- if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
PMD_DRV_LOG(NOTICE,
@@ -2485,8 +2485,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
}
- if (mask & ETH_VLAN_STRIP_MASK)
- hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ if (mask & RTE_ETH_VLAN_STRIP_MASK)
+ hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
return 0;
}
@@ -2508,33 +2508,33 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = hw->max_mtu;
host_features = VIRTIO_OPS(hw)->get_features(hw);
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
- DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_UDP_CKSUM;
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
}
if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
(1ULL << VIRTIO_NET_F_GUEST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
- dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_VLAN_INSERT;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
dev_info->tx_offload_capa |=
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM;
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}
tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
(1ULL << VIRTIO_NET_F_HOST_TSO6);
if ((host_features & tso_mask) == tso_mask)
- dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+ dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
return 0;
}
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a11..825a6adfc2b1 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,21 +41,21 @@
#define VMXNET3_TX_MAX_SEG UINT8_MAX
#define VMXNET3_TX_OFFLOAD_CAP \
- (DEV_TX_OFFLOAD_VLAN_INSERT | \
- DEV_TX_OFFLOAD_TCP_CKSUM | \
- DEV_TX_OFFLOAD_UDP_CKSUM | \
- DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_MULTI_SEGS)
+ (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
#define VMXNET3_RX_OFFLOAD_CAP \
- (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_RSS_HASH)
+ (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_SCATTER | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
@@ -399,9 +399,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
/* set the initial link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(eth_dev, &link);
return 0;
@@ -487,8 +487,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -548,7 +548,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
hw->queueDescPA = mz->iova;
hw->queue_desc_len = (uint16_t)size;
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Allocate memory structure for UPT1_RSSConf and configure */
mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
"rss_conf", rte_socket_id(),
@@ -844,15 +844,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
devRead->rxFilterConf.rxMode = 0;
/* Setting up feature flags */
- if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
- if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
devRead->misc.uptFeatures |= VMXNET3_F_LRO;
devRead->misc.maxNumRxSG = 0;
}
- if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
ret = vmxnet3_rss_configure(dev);
if (ret != VMXNET3_SUCCESS)
return ret;
@@ -864,7 +864,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
}
ret = vmxnet3_dev_vlan_offload_set(dev,
- ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+ RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
if (ret)
return ret;
@@ -931,7 +931,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
}
if (VMXNET3_VERSION_GE_4(hw) &&
- dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
/* Check for additional RSS */
ret = vmxnet3_v4_rss_configure(dev);
if (ret != VMXNET3_SUCCESS) {
@@ -1040,9 +1040,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
rte_eth_linkstatus_set(dev, &link);
hw->adapter_stopped = 1;
@@ -1372,7 +1372,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
dev_info->min_mtu = VMXNET3_MIN_MTU;
dev_info->max_mtu = VMXNET3_MAX_MTU;
- dev_info->speed_capa = ETH_LINK_SPEED_10G;
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1454,10 +1454,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret & 0x1)
- link.link_status = ETH_LINK_UP;
- link.link_duplex = ETH_LINK_FULL_DUPLEX;
- link.link_speed = ETH_SPEED_NUM_10G;
- link.link_autoneg = ETH_LINK_FIXED;
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
return rte_eth_linkstatus_set(dev, &link);
}
@@ -1510,7 +1510,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1580,8 +1580,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
uint32_t *vf_table = devRead->rxFilterConf.vfTable;
uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
- if (mask & ETH_VLAN_STRIP_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
else
devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1590,8 +1590,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
VMXNET3_CMD_UPDATE_FEATURE);
}
- if (mask & ETH_VLAN_FILTER_MASK) {
- if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
else
memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 59bee9723cfc..7588ba929b65 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
VMXNET3_MAX_RX_QUEUES + 1)
#define VMXNET3_RSS_OFFLOAD_ALL ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
#define VMXNET3_V4_RSS_MASK ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP)
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
#define VMXNET3_MANDATORY_V4_RSS ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP)
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP)
/* RSS configuration structure - shared with device through GPA */
typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 5cf53d4de825..0f2671f528f4 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
rss_hf = port_rss_conf->rss_hf &
(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
/* loading hashType */
dev_rss_conf->hashType = 0;
rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
- if (rss_hf & ETH_RSS_IPV4)
+ if (rss_hf & RTE_ETH_RSS_IPV4)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
- if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
- if (rss_hf & ETH_RSS_IPV6)
+ if (rss_hf & RTE_ETH_RSS_IPV6)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
- if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+ if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 5251db0b1674..ecc6ef2965ee 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,12 +71,12 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -334,7 +334,7 @@ check_port_link_status(uint16_t port_id)
if (link_get_err >= 0 && link.link_status) {
const char *dp = (link.link_duplex ==
- ETH_LINK_FULL_DUPLEX) ?
+ RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex";
printf("\nPort %u Link Up - speed %s - %s\n",
port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index f48400e21156..e4c627e203a4 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,18 +116,18 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -151,9 +151,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
@@ -243,9 +243,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
"Error during getting device (port %u) info: %s\n",
BOND_PORT, strerror(-retval));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
if (retval != 0)
rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 1b1029660e77..e6af8420e4c6 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,16 +80,16 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
}
},
};
@@ -127,9 +127,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6c9..5053d174335c 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
int ret;
memset(&cfg_port, 0, sizeof(cfg_port));
- cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+ cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
pause_param->tx_pause = 0;
pause_param->rx_pause = 0;
switch (fc_conf.mode) {
- case RTE_FC_RX_PAUSE:
+ case RTE_ETH_FC_RX_PAUSE:
pause_param->rx_pause = 1;
break;
- case RTE_FC_TX_PAUSE:
+ case RTE_ETH_FC_TX_PAUSE:
pause_param->tx_pause = 1;
break;
- case RTE_FC_FULL:
+ case RTE_ETH_FC_FULL:
pause_param->rx_pause = 1;
pause_param->tx_pause = 1;
default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
if (pause_param->tx_pause) {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_FULL;
+ fc_conf.mode = RTE_ETH_FC_FULL;
else
- fc_conf.mode = RTE_FC_TX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
} else {
if (pause_param->rx_pause)
- fc_conf.mode = RTE_FC_RX_PAUSE;
+ fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
else
- fc_conf.mode = RTE_FC_NONE;
+ fc_conf.mode = RTE_ETH_FC_NONE;
}
status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
for (vf = 0; vf < num_vfs; vf++) {
#ifdef RTE_NET_IXGBE
rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
- ETH_VMDQ_ACCEPT_UNTAG, 0);
+ RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
#endif
}
/* Enable Rx vlan filter; unsupported VF status is discarded */
- ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
if (ret != 0)
return ret;
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index f70ab0cc9e38..3ac98add5692 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,14 +283,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -312,12 +312,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ca6cd200caad..5780928d75ee 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,14 +614,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
struct rte_eth_rxconf rx_conf;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
- .rss_hf = ETH_RSS_IP |
- ETH_RSS_TCP |
- ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_UDP,
}
}
};
@@ -643,9 +643,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rx_conf = dev_info.default_rxconf;
rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index db71f5aa0401..f44ee65372ff 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -218,9 +218,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55ef..150406e385d4 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
memset(&link, 0, sizeof(link));
do {
link_get_err = rte_eth_link_get(port_id, &link);
- if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+ if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
break;
rte_delay_ms(CHECK_INTERVAL);
} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
if (link_get_err < 0)
rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
rte_strerror(-link_get_err));
- if (link.link_status == ETH_LINK_DOWN)
+ if (link.link_status == RTE_ETH_LINK_DOWN)
rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
}
@@ -138,12 +138,12 @@ init_port(void)
},
.txmode = {
.offloads =
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_UDP_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM |
- DEV_TX_OFFLOAD_TCP_TSO,
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO,
},
};
struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 0c413180f889..94e3ac91b299 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,13 +819,13 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
/* Configuring port to use RSS for multiple RX queues. 8< */
static const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_PROTO_MASK,
+ .rss_hf = RTE_ETH_RSS_PROTO_MASK,
}
}
};
@@ -853,9 +853,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..aa41fcc1d037 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,14 +148,14 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..8e974a8d0a92 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -267,5 +267,5 @@ link_is_up(const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..8aabea002bbb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_MULTI_SEGS),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
},
};
@@ -740,7 +740,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1097,9 +1097,9 @@ main(int argc, char **argv)
n_tx_queue = nb_lcores;
if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
n_tx_queue = MAX_TX_QUEUE_PER_PORT;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..73932564e459 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,20 +234,20 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1456,10 +1456,10 @@ print_usage(const char *prgname)
" \"parallel\" : Parallel\n"
" --" CMD_LINE_OPT_RX_OFFLOAD
": bitmask of the RX HW offload capabilities to enable/use\n"
- " (DEV_RX_OFFLOAD_*)\n"
+ " (RTE_ETH_RX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_TX_OFFLOAD
": bitmask of the TX HW offload capabilities to enable/use\n"
- " (DEV_TX_OFFLOAD_*)\n"
+ " (RTE_ETH_TX_OFFLOAD_*)\n"
" --" CMD_LINE_OPT_REASSEMBLE " NUM"
": max number of entries in reassemble(fragment) table\n"
" (zero (default value) disables reassembly)\n"
@@ -1908,7 +1908,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2211,12 +2211,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
frame_size = MTU_TO_FRAMELEN(mtu_size);
if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.max_rx_pkt_len = frame_size;
if (multi_seg_required()) {
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2239,12 +2239,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
portid, local_port_conf.txmode.offloads,
dev_info.tx_offload_capa);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
- local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+ local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
printf("port %u configurng rx_offloads=0x%" PRIx64
", tx_offloads=0x%" PRIx64 "\n",
@@ -2302,7 +2302,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
/* Pre-populate pkt offloads based on capabilities */
qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
- if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+ if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
tx_queueid++;
@@ -2663,7 +2663,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
struct rte_flow *flow;
int ret;
- if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+ if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
return;
/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
if (inbound) {
if ((dev_info.rx_offload_capa &
- DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware RX IPSec offload is not supported\n");
return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
} else { /* outbound */
if ((dev_info.tx_offload_capa &
- DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
RTE_LOG(WARNING, PORT,
"hardware TX IPSec offload is not supported\n");
return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+ *rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
}
/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
&& rule->portid == port_id)
- *tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+ *tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
return 0;
}
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..96fb325ff180 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -112,11 +112,11 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
+ .offloads = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
},
};
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..81124dc0dc88 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
/* Options for configuring ethernet port */
static struct rte_eth_conf port_conf = {
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -608,9 +608,9 @@ init_port(uint16_t port)
"Error during getting device (port %u) info: %s\n",
port, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -792,9 +792,9 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
if (new_mtu > RTE_ETHER_MAX_LEN)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
/* mtu + length of header + length of FCS = max pkt length */
conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 5f539c458cdd..89489843e2bd 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,12 +216,12 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1809,7 +1809,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2633,9 +2633,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (retval < 0) {
printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index b8c1e02d7598..80a72f7095cf 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -15,7 +15,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
uint16_t nb_ports_available = 0;
@@ -23,9 +23,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
int ret;
if (rsrc->event_mode) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}
/* Initialise each port */
@@ -61,9 +61,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
local_port_conf.rx_adv_conf.rss_conf.rss_hf);
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queue. 8< */
ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index bbb4a27a6d54..2e50339afb61 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 4e1a17cfe4f5..d228a842788d 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 911e40c66e0e..b4a69dde63dc 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the number of queues for a port. */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..9323426e9b1d 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,20 +124,20 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
- ETH_RSS_TCP | ETH_RSS_SCTP,
+ .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -1815,9 +1815,9 @@ parse_args(int argc, char **argv)
printf("jumbo frame is enabled\n");
port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/*
* if no max-pkt-len set, then use the
@@ -1970,7 +1970,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2080,9 +2080,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..278fe95970f3 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,18 +111,18 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -494,8 +494,8 @@ parse_args(int argc, char **argv)
const struct option lenopts = {"max-pkt-len",
required_argument, 0, 0};
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/*
* if no max-pkt-len set, use the default
@@ -628,7 +628,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* Clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -807,9 +807,9 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..85609e9d4593 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,19 +250,19 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_UDP,
+ .rss_hf = RTE_ETH_RSS_UDP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
@@ -1961,9 +1961,9 @@ parse_args(int argc, char **argv)
printf("jumbo frame is enabled \n");
port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/**
* if no max-pkt-len set, use the default value
@@ -2222,7 +2222,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -2622,9 +2622,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
rte_panic("Error during getting device (port %u) info:"
"%s\n", port_id, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..500444565463 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,19 +120,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -703,8 +703,8 @@ parse_args(int argc, char **argv)
"max-pkt-len", required_argument, 0, 0
};
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/*
* if no max-pkt-len set, use the default
@@ -926,7 +926,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -1035,15 +1035,15 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
if (dev_info.max_rx_queues == 1)
- local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+ local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index 7470aa539a90..7c1214512983 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.intr_conf = {
.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
link_get_err < 0 ? "0" :
rte_eth_link_speed_to_str(link.link_speed),
link_get_err < 0 ? "Link get failed" :
- (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? \
"full-duplex" : "half-duplex"),
port_statistics[portid].tx,
port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure RX and TX queues. 8< */
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
const struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS
+ .mq_mode = RTE_ETH_MQ_RX_RSS
}
};
const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
{
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
}
};
const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
info.default_rxconf.rx_drop_en = 1;
- if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
static struct rte_eth_conf eth_port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.split_hdr_size = 0,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
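
For reference, a minimal RSS-enabled configuration expressed entirely with the renamed constants would look like the sketch below; the field values are illustrative and simply mirror the examples above:

static const struct rte_eth_conf rss_port_conf = {
	.rxmode = {
		.mq_mode = RTE_ETH_MQ_RX_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,		/* use the PMD default key */
			.rss_hf = RTE_ETH_RSS_IP,	/* hash on IPv4/IPv6 headers */
		},
	},
	.txmode = {
		.mq_mode = RTE_ETH_MQ_TX_NONE,
	},
};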
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d2fe9f6b50d8..eb15899c902f 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
return ret;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
if (ret != 0)
return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..86671655b432 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,19 +307,19 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_TCP,
+ .rss_hf = RTE_ETH_RSS_TCP,
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -2988,9 +2988,9 @@ parse_args(int argc, char **argv)
printf("jumbo frame is enabled - disabling simple TX path\n");
port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* if no max-pkt-len set, use the default value
* RTE_ETHER_MAX_LEN
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
@@ -3577,9 +3577,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..2e68a3870a09 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
- .mq_mode = ETH_MQ_RX_NONE,
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
.split_hdr_size = 0, /* Header split buffer size */
},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
},
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.lpbk_mode = 0,
};
-#define RETA_CONF_SIZE (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE (RTE_ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
static int
rss_setup(uint16_t port_id,
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
rss = params->rx.rss;
if (rss) {
if ((port_info.reta_size == 0) ||
- (port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+ (port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
return NULL;
if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
/* Port */
memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
if (rss) {
- port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
port_conf.rx_adv_conf.rss_conf.rss_hf =
- (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+ (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
port_info.flow_type_rss_offloads;
}
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
if (rte_eth_link_get(link->port_id, &link_params) < 0)
return 0;
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+ return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
}
struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..db32b0d6c427 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -197,14 +197,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Force full Tx path in the driver, required for IEEE1588 */
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
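
The ptpclient change above keeps the capability check before requesting the Rx timestamp offload; the same guard, isolated into a hedged sketch (helper name and error convention are illustrative):

#include <errno.h>
#include <rte_ethdev.h>

/* Request Rx timestamping only when the port supports it. */
static int
request_rx_timestamping(const struct rte_eth_dev_info *dev_info,
			struct rte_eth_conf *port_conf)
{
	if (!(dev_info->rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
		return -ENOTSUP;	/* let the caller decide what to do */
	port_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
	return 0;
}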
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..5ef14c176b11 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,19 +51,19 @@ static struct rte_mempool *pool = NULL;
***/
static struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
.rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
+ .rss_hf = RTE_ETH_RSS_IP,
},
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -333,8 +333,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_rx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -379,8 +379,8 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
port_tx, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..e750928fb89d 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -61,7 +61,7 @@ static struct rte_eth_conf port_conf = {
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_DCB_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
};
@@ -106,9 +106,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6f20f98b2b30..08df716dc0fb 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -145,17 +145,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
if (hw_timestamping) {
- if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+ if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
printf("\nERROR: Port %u does not support hardware timestamping\n"
, port);
return -1;
}
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
if (hwts_dynfield_offset < 0) {
printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
/* for port configuration all features are off by default */
struct rte_eth_conf port_conf = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
+ .mq_mode = RTE_ETH_MQ_RX_RSS,
},
};
const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
if (retval != 0)
return retval;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/*
* Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae08261befd7..737df4ca2a17 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -55,9 +55,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index bc3d71c8984e..b1d363ae21db 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -109,23 +109,23 @@ static int nb_sockets;
/* empty vmdq configuration structure. Filled in programmatically */
static struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
/*
* VLAN strip is necessary for 1G NIC such as I350,
* this fixes bug of ipv4 forwarding in guest can't
* forward packets from one virtio dev to another virtio dev.
*/
- .offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
- DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_MULTI_SEGS |
- DEV_TX_OFFLOAD_TCP_TSO),
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO),
},
.rx_adv_conf = {
/*
@@ -133,7 +133,7 @@ static struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -290,9 +290,9 @@ port_init(uint16_t port)
return -1;
rx_rings = (uint16_t)dev_info.max_rx_queues;
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0) {
@@ -562,8 +562,8 @@ us_vhost_parse_args(int argc, char **argv)
case 'P':
promiscuous = 1;
vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
- ETH_VMDQ_ACCEPT_BROADCAST |
- ETH_VMDQ_ACCEPT_MULTICAST;
+ RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+ RTE_ETH_VMDQ_ACCEPT_MULTICAST;
break;
case OPT_VM2VM_NUM:
@@ -638,7 +638,7 @@ us_vhost_parse_args(int argc, char **argv)
mergeable = !!ret;
if (ret) {
vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
vmdq_conf_default.rxmode.max_rx_pkt_len
= JUMBO_FRAME_MAX_SIZE;
}
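
Several examples above pair the two offloads whenever jumbo frames are requested; the pairing as a standalone sketch under the new names (max_pkt_len is caller-supplied):

static void
enable_jumbo(struct rte_eth_conf *conf, uint32_t max_pkt_len)
{
	/* Jumbo Rx frames imply multi-segment Tx mbufs in these examples. */
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
	conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
	conf->rxmode.max_rx_pkt_len = max_pkt_len;
}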
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..dddcde40efe2 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -78,9 +78,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -278,7 +278,7 @@ check_all_ports_link_status(uint32_t port_mask)
continue;
}
/* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
all_ports_up = 0;
break;
}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index d3bc19f78ee5..16782a5d850f 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
/* empty vmdq configuration structure. Filled in programmatically */
static const struct rte_eth_conf vmdq_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_ONLY,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
},
.rx_adv_conf = {
/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
* appropriate values
*/
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_8_POOLS,
+ .nb_queue_pools = RTE_ETH_8_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
if (retval != 0)
return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 685a03bdd194..f58625a76227 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
static unsigned num_ports;
/* number of pools (if user does not specify any, 32 by default) */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs num_tcs = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs num_tcs = RTE_ETH_4_TCS;
static uint16_t num_queues, num_vmdq_queues;
static uint16_t vmdq_pool_base, vmdq_queue_base;
static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
/* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
- .mq_mode = ETH_MQ_RX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
},
.txmode = {
- .mq_mode = ETH_MQ_TX_VMDQ_DCB,
+ .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
.dcb_tc = {0},
},
.dcb_rx_conf = {
- .nb_tcs = ETH_4_TCS,
+ .nb_tcs = RTE_ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
- .nb_queue_pools = ETH_32_POOLS,
+ .nb_queue_pools = RTE_ETH_32_POOLS,
.dcb_tc = {0},
},
},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
conf.pool_map[i].pools = 1UL << i;
vmdq_conf.pool_map[i].pools = 1UL << i;
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+ for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++){
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
sizeof(tx_conf)));
if (rss_enable) {
- eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
- eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
- ETH_RSS_UDP |
- ETH_RSS_TCP |
- ETH_RSS_SCTP;
+ eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+ RTE_ETH_RSS_UDP |
+ RTE_ETH_RSS_TCP |
+ RTE_ETH_RSS_SCTP;
}
return 0;
}
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
return retval;
}
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -390,9 +390,9 @@ vmdq_parse_num_pools(const char *q_arg)
if (n != 16 && n != 32)
return -1;
if (n == 16)
- num_pools = ETH_16_POOLS;
+ num_pools = RTE_ETH_16_POOLS;
else
- num_pools = ETH_32_POOLS;
+ num_pools = RTE_ETH_32_POOLS;
return 0;
}
@@ -412,9 +412,9 @@ vmdq_parse_num_tcs(const char *q_arg)
if (n != 4 && n != 8)
return -1;
if (n == 4)
- num_tcs = ETH_4_TCS;
+ num_tcs = RTE_ETH_4_TCS;
else
- num_tcs = ETH_8_TCS;
+ num_tcs = RTE_ETH_8_TCS;
return 0;
}
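
The vmdq_dcb option parsing above maps user-supplied counts onto the renamed enumerators; the same mapping written as a self-contained helper (the function name is illustrative):

static int
pools_from_count(unsigned int n, enum rte_eth_nb_pools *pools)
{
	switch (n) {
	case 16:
		*pools = RTE_ETH_16_POOLS;
		return 0;
	case 32:
		*pools = RTE_ETH_32_POOLS;
		return 0;
	default:
		return -1;	/* only 16 or 32 pools are valid here */
	}
}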
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1b5..2be877d048cf 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -98,9 +98,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
#define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
#define RTE_RX_OFFLOAD_BIT2STR(_name) \
- { DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name) \
{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
static const struct {
@@ -126,14 +123,14 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
- RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
};
#undef RTE_RX_OFFLOAD_BIT2STR
#undef RTE_ETH_RX_OFFLOAD_BIT2STR
#define RTE_TX_OFFLOAD_BIT2STR(_name) \
- { DEV_TX_OFFLOAD_##_name, #_name }
+ { RTE_ETH_TX_OFFLOAD_##_name, #_name }
static const struct {
uint64_t offload;
@@ -1184,32 +1181,32 @@ uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
switch (speed) {
- case ETH_SPEED_NUM_10M:
- return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
- case ETH_SPEED_NUM_100M:
- return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
- case ETH_SPEED_NUM_1G:
- return ETH_LINK_SPEED_1G;
- case ETH_SPEED_NUM_2_5G:
- return ETH_LINK_SPEED_2_5G;
- case ETH_SPEED_NUM_5G:
- return ETH_LINK_SPEED_5G;
- case ETH_SPEED_NUM_10G:
- return ETH_LINK_SPEED_10G;
- case ETH_SPEED_NUM_20G:
- return ETH_LINK_SPEED_20G;
- case ETH_SPEED_NUM_25G:
- return ETH_LINK_SPEED_25G;
- case ETH_SPEED_NUM_40G:
- return ETH_LINK_SPEED_40G;
- case ETH_SPEED_NUM_50G:
- return ETH_LINK_SPEED_50G;
- case ETH_SPEED_NUM_56G:
- return ETH_LINK_SPEED_56G;
- case ETH_SPEED_NUM_100G:
- return ETH_LINK_SPEED_100G;
- case ETH_SPEED_NUM_200G:
- return ETH_LINK_SPEED_200G;
+ case RTE_ETH_SPEED_NUM_10M:
+ return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+ case RTE_ETH_SPEED_NUM_100M:
+ return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+ case RTE_ETH_SPEED_NUM_1G:
+ return RTE_ETH_LINK_SPEED_1G;
+ case RTE_ETH_SPEED_NUM_2_5G:
+ return RTE_ETH_LINK_SPEED_2_5G;
+ case RTE_ETH_SPEED_NUM_5G:
+ return RTE_ETH_LINK_SPEED_5G;
+ case RTE_ETH_SPEED_NUM_10G:
+ return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G:
+ return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G:
+ return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G:
+ return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G:
+ return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G:
+ return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G:
+ return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G:
+ return RTE_ETH_LINK_SPEED_200G;
default:
return 0;
}
@@ -1458,7 +1455,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If jumbo frames are enabled, check that the maximum RX packet
* length is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
@@ -1491,7 +1488,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size =
dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1543,12 +1540,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
- if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
- (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+ (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
RTE_ETHDEV_LOG(ERR,
"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
port_id,
- rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+ rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
ret = -EINVAL;
goto rollback;
}
@@ -2157,7 +2154,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
- if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size =
dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -2752,21 +2749,21 @@ const char *
rte_eth_link_speed_to_str(uint32_t link_speed)
{
switch (link_speed) {
- case ETH_SPEED_NUM_NONE: return "None";
- case ETH_SPEED_NUM_10M: return "10 Mbps";
- case ETH_SPEED_NUM_100M: return "100 Mbps";
- case ETH_SPEED_NUM_1G: return "1 Gbps";
- case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
- case ETH_SPEED_NUM_5G: return "5 Gbps";
- case ETH_SPEED_NUM_10G: return "10 Gbps";
- case ETH_SPEED_NUM_20G: return "20 Gbps";
- case ETH_SPEED_NUM_25G: return "25 Gbps";
- case ETH_SPEED_NUM_40G: return "40 Gbps";
- case ETH_SPEED_NUM_50G: return "50 Gbps";
- case ETH_SPEED_NUM_56G: return "56 Gbps";
- case ETH_SPEED_NUM_100G: return "100 Gbps";
- case ETH_SPEED_NUM_200G: return "200 Gbps";
- case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+ case RTE_ETH_SPEED_NUM_NONE: return "None";
+ case RTE_ETH_SPEED_NUM_10M: return "10 Mbps";
+ case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+ case RTE_ETH_SPEED_NUM_1G: return "1 Gbps";
+ case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+ case RTE_ETH_SPEED_NUM_5G: return "5 Gbps";
+ case RTE_ETH_SPEED_NUM_10G: return "10 Gbps";
+ case RTE_ETH_SPEED_NUM_20G: return "20 Gbps";
+ case RTE_ETH_SPEED_NUM_25G: return "25 Gbps";
+ case RTE_ETH_SPEED_NUM_40G: return "40 Gbps";
+ case RTE_ETH_SPEED_NUM_50G: return "50 Gbps";
+ case RTE_ETH_SPEED_NUM_56G: return "56 Gbps";
+ case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+ case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+ case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
default: return "Invalid";
}
}
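
A minimal caller of the renamed speed and link-string helpers might look like this, assuming a valid port id:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;
	char buf[RTE_ETH_LINK_MAX_STR_LEN];

	if (rte_eth_link_get_nowait(port_id, &link) == 0) {
		rte_eth_link_to_str(buf, sizeof(buf), &link);
		printf("Port %u: %s\n", port_id, buf);
	}
}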
@@ -2790,14 +2787,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
return -EINVAL;
}
- if (eth_link->link_status == ETH_LINK_DOWN)
+ if (eth_link->link_status == RTE_ETH_LINK_DOWN)
return snprintf(str, len, "Link down");
else
return snprintf(str, len, "Link up at %s %s %s",
rte_eth_link_speed_to_str(eth_link->link_speed),
- (eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"FDX" : "HDX",
- (eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+ (eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
"Autoneg" : "Fixed");
}
@@ -3663,7 +3660,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
dev = &rte_eth_devices[port_id];
if (!(dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_VLAN_FILTER)) {
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
port_id);
return -ENOSYS;
@@ -3750,44 +3747,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
dev_offloads = orig_offloads;
/* check which option changed by application */
- cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+ cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
- mask |= ETH_VLAN_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ mask |= RTE_ETH_VLAN_STRIP_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+ cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
- mask |= ETH_VLAN_FILTER_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+ mask |= RTE_ETH_VLAN_FILTER_MASK;
}
- cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+ cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
- mask |= ETH_VLAN_EXTEND_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+ mask |= RTE_ETH_VLAN_EXTEND_MASK;
}
- cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
- org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+ cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+ org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
if (cur != org) {
if (cur)
- dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
else
- dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
- mask |= ETH_QINQ_STRIP_MASK;
+ dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+ mask |= RTE_ETH_QINQ_STRIP_MASK;
}
/*no change*/
@@ -3832,17 +3829,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
dev = &rte_eth_devices[port_id];
dev_offloads = &dev->data->dev_conf.rxmode.offloads;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
- ret |= ETH_VLAN_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
- ret |= ETH_VLAN_FILTER_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
- ret |= ETH_VLAN_EXTEND_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+ ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
- if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
- ret |= ETH_QINQ_STRIP_OFFLOAD;
+ if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+ ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
return ret;
}
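
The get/set pair above operates on the renamed mask bits; a sketch of toggling VLAN stripping at runtime (minimal error handling, helper name illustrative):

static int
enable_vlan_strip(uint16_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;	/* negative errno from ethdev */
	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}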
@@ -3919,7 +3916,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
return -EINVAL;
}
- if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+ if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
return -EINVAL;
}
@@ -4116,7 +4113,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4142,7 +4139,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
return -EINVAL;
}
- if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+ if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
return -EINVAL;
}
@@ -4283,8 +4280,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
port_id);
return -EINVAL;
}
- if (pool >= ETH_64_POOLS) {
- RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+ if (pool >= RTE_ETH_64_POOLS) {
+ RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
@@ -4548,21 +4545,21 @@ rte_eth_mirror_rule_set(uint16_t port_id,
return -EINVAL;
}
- if (mirror_conf->dst_pool >= ETH_64_POOLS) {
+ if (mirror_conf->dst_pool >= RTE_ETH_64_POOLS) {
RTE_ETHDEV_LOG(ERR, "Invalid dst pool, pool id must be 0-%d\n",
- ETH_64_POOLS - 1);
+ RTE_ETH_64_POOLS - 1);
return -EINVAL;
}
- if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
- ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
+ if ((mirror_conf->rule_type & (RTE_ETH_MIRROR_VIRTUAL_POOL_UP |
+ RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
(mirror_conf->pool_mask == 0)) {
RTE_ETHDEV_LOG(ERR,
"Invalid mirror pool, pool mask can not be 0\n");
return -EINVAL;
}
- if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
+ if ((mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) &&
mirror_conf->vlan.vlan_mask == 0) {
RTE_ETHDEV_LOG(ERR,
"Invalid vlan mask, vlan mask can not be 0\n");
@@ -6238,7 +6235,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
rte_tel_data_add_dict_string(d, status_str, "UP");
rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
rte_tel_data_add_dict_string(d, "duplex",
- (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
"full-duplex" : "half-duplex");
return 0;
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351fdb..3e4109491316 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -249,7 +249,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
* field is not supported, its value is 0.
* All byte-related statistics do not include Ethernet FCS regardless
* of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
*/
struct rte_eth_stats {
uint64_t ipackets; /**< Total number of successfully received packets. */
@@ -279,42 +279,74 @@ struct rte_eth_stats {
/**
* Device supported speeds bitmap flags
*/
-#define ETH_LINK_SPEED_AUTONEG (0 << 0) /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED (1 << 0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD (1 << 1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M (1 << 2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD (1 << 3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M (1 << 4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G (1 << 5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_2_5G (1 << 6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G (1 << 7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_10G (1 << 8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_20G (1 << 9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_25G (1 << 10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_40G (1 << 11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_50G (1 << 12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_56G (1 << 13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_100G (1 << 14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G (1 << 15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG (0 << 0) /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED (1 << 0) /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD (1 << 1) /**< 10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M (1 << 2) /**< 10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD (1 << 3) /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M (1 << 4) /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G (1 << 5) /**< 1 Gbps */
+#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G (1 << 6) /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G (1 << 7) /**< 5 Gbps */
+#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G (1 << 8) /**< 10 Gbps */
+#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G (1 << 9) /**< 20 Gbps */
+#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G (1 << 10) /**< 25 Gbps */
+#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G (1 << 11) /**< 40 Gbps */
+#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G (1 << 12) /**< 50 Gbps */
+#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G (1 << 13) /**< 56 Gbps */
+#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G (1 << 14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G (1 << 15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**
* Ethernet numeric link speeds in Mbps
*/
-#define ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
+#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
+#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
+#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
+#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
+#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
+#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
+#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
+#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
+#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**
* A structure used to retrieve link-level information of an Ethernet port.
@@ -328,12 +360,18 @@ struct rte_eth_link {
} __rte_aligned(8); /**< aligned for atomic64 read/write */
/* Utility constants */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**
@@ -349,9 +387,12 @@ struct rte_eth_thresh {
/**
* Simple flags are used for rte_eth_conf.rxmode.mq_mode.
*/
-#define ETH_MQ_RX_RSS_FLAG 0x1
-#define ETH_MQ_RX_DCB_FLAG 0x2
-#define ETH_MQ_RX_VMDQ_FLAG 0x4
+#define RTE_ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**
* A set of values to identify what method is to be used to route
@@ -359,50 +400,49 @@ struct rte_eth_thresh {
*/
enum rte_eth_rx_mq_mode {
/** None of DCB,RSS or VMDQ mode */
- ETH_MQ_RX_NONE = 0,
+ RTE_ETH_MQ_RX_NONE = 0,
/** For RX side, only RSS is on */
- ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
/** For RX side,only DCB is on. */
- ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
/** Both DCB and RSS enable */
- ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Only VMDQ, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
/** RSS mode with VMDQ */
- ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
/** Use VMDQ+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
/** Enable both VMDQ and DCB in VMDq */
- ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
- ETH_MQ_RX_VMDQ_FLAG,
+ RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+ RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS ETH_MQ_RX_RSS
-#define VMDQ_DCB ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
* packets using multi-TCs.
*/
enum rte_eth_tx_mq_mode {
- ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
- ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
- ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
+ RTE_ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
+ RTE_ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
+ RTE_ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
+ RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the RX features of an Ethernet port.
@@ -415,7 +455,7 @@ struct rte_eth_rxmode {
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
- * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -430,12 +470,17 @@ struct rte_eth_rxmode {
* Note that single VLAN is treated the same as inner VLAN.
*/
enum rte_vlan_type {
- ETH_VLAN_TYPE_UNKNOWN = 0,
- ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
- ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
- ETH_VLAN_TYPE_MAX,
+ RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+ RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+ RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+ RTE_ETH_VLAN_TYPE_MAX,
};
+#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+
/**
* A structure used to describe a vlan filter.
* If the bit corresponding to a VID is set, such VID is on.
@@ -506,37 +551,68 @@ struct rte_eth_rss_conf {
* Below macros are defined for RSS offload types, they can be used to
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
-#define ETH_RSS_IPV4 (1ULL << 2)
-#define ETH_RSS_FRAG_IPV4 (1ULL << 3)
-#define ETH_RSS_NONFRAG_IPV4_TCP (1ULL << 4)
-#define ETH_RSS_NONFRAG_IPV4_UDP (1ULL << 5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP (1ULL << 6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
-#define ETH_RSS_IPV6 (1ULL << 8)
-#define ETH_RSS_FRAG_IPV6 (1ULL << 9)
-#define ETH_RSS_NONFRAG_IPV6_TCP (1ULL << 10)
-#define ETH_RSS_NONFRAG_IPV6_UDP (1ULL << 11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP (1ULL << 12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
-#define ETH_RSS_L2_PAYLOAD (1ULL << 14)
-#define ETH_RSS_IPV6_EX (1ULL << 15)
-#define ETH_RSS_IPV6_TCP_EX (1ULL << 16)
-#define ETH_RSS_IPV6_UDP_EX (1ULL << 17)
-#define ETH_RSS_PORT (1ULL << 18)
-#define ETH_RSS_VXLAN (1ULL << 19)
-#define ETH_RSS_GENEVE (1ULL << 20)
-#define ETH_RSS_NVGRE (1ULL << 21)
-#define ETH_RSS_GTPU (1ULL << 23)
-#define ETH_RSS_ETH (1ULL << 24)
-#define ETH_RSS_S_VLAN (1ULL << 25)
-#define ETH_RSS_C_VLAN (1ULL << 26)
-#define ETH_RSS_ESP (1ULL << 27)
-#define ETH_RSS_AH (1ULL << 28)
-#define ETH_RSS_L2TPV3 (1ULL << 29)
-#define ETH_RSS_PFCP (1ULL << 30)
-#define ETH_RSS_PPPOE (1ULL << 31)
-#define ETH_RSS_ECPRI (1ULL << 32)
-#define ETH_RSS_MPLS (1ULL << 33)
+#define RTE_ETH_RSS_IPV4 (1ULL << 2)
+#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4 (1ULL << 3)
+#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP (1ULL << 4)
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP (1ULL << 5)
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP (1ULL << 6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6 (1ULL << 8)
+#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6 (1ULL << 9)
+#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP (1ULL << 10)
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP (1ULL << 11)
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP (1ULL << 12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD (1ULL << 14)
+#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX (1ULL << 15)
+#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX (1ULL << 16)
+#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX (1ULL << 17)
+#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT (1ULL << 18)
+#define ETH_RSS_PORT RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN (1ULL << 19)
+#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE (1ULL << 20)
+#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE (1ULL << 21)
+#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU (1ULL << 23)
+#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH (1ULL << 24)
+#define ETH_RSS_ETH RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN (1ULL << 25)
+#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN (1ULL << 26)
+#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP (1ULL << 27)
+#define ETH_RSS_ESP RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH (1ULL << 28)
+#define ETH_RSS_AH RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3 (1ULL << 29)
+#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP (1ULL << 30)
+#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE (1ULL << 31)
+#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI (1ULL << 32)
+#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS (1ULL << 33)
+#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
/*
* We use the following macros to combine with above ETH_RSS_* for
@@ -547,12 +623,18 @@ struct rte_eth_rss_conf {
* the same level are used simultaneously, it is the same case as none of
* them are added.
*/
-#define ETH_RSS_L3_SRC_ONLY (1ULL << 63)
-#define ETH_RSS_L3_DST_ONLY (1ULL << 62)
-#define ETH_RSS_L4_SRC_ONLY (1ULL << 61)
-#define ETH_RSS_L4_DST_ONLY (1ULL << 60)
-#define ETH_RSS_L2_SRC_ONLY (1ULL << 59)
-#define ETH_RSS_L2_DST_ONLY (1ULL << 58)
+#define RTE_ETH_RSS_L3_SRC_ONLY (1ULL << 63)
+#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY (1ULL << 62)
+#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY (1ULL << 61)
+#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY (1ULL << 60)
+#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY (1ULL << 59)
+#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY (1ULL << 58)
+#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
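As a rough illustration (an editor's sketch, not part of this patch), the renamed type and modifier flags combine in rte_eth_rss_conf exactly as the old ones did, and rte_eth_dev_rss_hash_update() is the usual way to apply them; port_id is assumed to be an already configured port:

	/* Sketch: hash non-fragmented IPv4/UDP flows on the L3 source
	 * address only. Setting both SRC_ONLY and DST_ONLY of the same
	 * layer cancels out, per the comment above.
	 */
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep the current hash key */
		.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
			  RTE_ETH_RSS_L3_SRC_ONLY,
	};
	int ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);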
/*
* Only select IPV6 address prefix as RSS input set according to
@@ -580,22 +662,27 @@ struct rte_eth_rss_conf {
* It basically stands for the innermost encapsulation level RSS
* can be performed on according to PMD and device capabilities.
*/
-#define ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
-#define ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
-#define ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
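A minimal sketch (illustrative only) of how the level macros compose into rss_hf and decompose back out:

	/* Request RSS on the innermost encapsulation level for IPv4. */
	uint64_t rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_LEVEL_INNERMOST;

	/* RTE_ETH_RSS_LEVEL() extracts bits 50-51 again: yields 2. */
	uint32_t level = RTE_ETH_RSS_LEVEL(rss_hf);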
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -619,213 +706,277 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
return rss_hf;
}
-#define ETH_RSS_IPV6_PRE32 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
-#define ETH_RSS_IPV6_PRE40 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
-#define ETH_RSS_IPV6_PRE48 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
-#define ETH_RSS_IPV6_PRE56 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
-#define ETH_RSS_IPV6_PRE64 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
-#define ETH_RSS_IPV6_PRE96 ( \
- ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+ RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
-#define ETH_RSS_IPV6_PRE32_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
-#define ETH_RSS_IPV6_PRE40_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
-#define ETH_RSS_IPV6_PRE48_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
-#define ETH_RSS_IPV6_PRE56_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
-#define ETH_RSS_IPV6_PRE64_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
-#define ETH_RSS_IPV6_PRE96_UDP ( \
- ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
-#define ETH_RSS_IPV6_PRE32_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
-#define ETH_RSS_IPV6_PRE40_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
-#define ETH_RSS_IPV6_PRE48_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
-#define ETH_RSS_IPV6_PRE56_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
-#define ETH_RSS_IPV6_PRE64_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
-#define ETH_RSS_IPV6_PRE96_TCP ( \
- ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
- ETH_RSS_S_VLAN | \
- ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+ RTE_ETH_RSS_S_VLAN | \
+ RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
/**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
- ETH_RSS_IPV4 | \
- ETH_RSS_FRAG_IPV4 | \
- ETH_RSS_NONFRAG_IPV4_TCP | \
- ETH_RSS_NONFRAG_IPV4_UDP | \
- ETH_RSS_NONFRAG_IPV4_SCTP | \
- ETH_RSS_NONFRAG_IPV4_OTHER | \
- ETH_RSS_IPV6 | \
- ETH_RSS_FRAG_IPV6 | \
- ETH_RSS_NONFRAG_IPV6_TCP | \
- ETH_RSS_NONFRAG_IPV6_UDP | \
- ETH_RSS_NONFRAG_IPV6_SCTP | \
- ETH_RSS_NONFRAG_IPV6_OTHER | \
- ETH_RSS_L2_PAYLOAD | \
- ETH_RSS_IPV6_EX | \
- ETH_RSS_IPV6_TCP_EX | \
- ETH_RSS_IPV6_UDP_EX | \
- ETH_RSS_PORT | \
- ETH_RSS_VXLAN | \
- ETH_RSS_GENEVE | \
- ETH_RSS_NVGRE | \
- ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_L2_PAYLOAD | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX | \
+ RTE_ETH_RSS_PORT | \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE | \
+ RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
* Some RSS RETA sizes may not be supported by some drivers, check the
* documentation or the description of relevant functions for more details.
*/
-#define ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE 64
+#define RTE_ETH_RSS_RETA_SIZE_64 64
+#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE 64
+#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
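A hedged sketch of RETA programming with the renamed sizes: the rte_eth_rss_reta_entry64 array is indexed in groups of RTE_ETH_RETA_GROUP_SIZE, and a real application should take reta_size from rte_eth_dev_info_get() rather than hard-coding 128 as done here (ret and port_id assumed declared):

	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		uint16_t grp = i / RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[grp].mask = UINT64_MAX;	/* update all entries */
		/* alternate traffic between queue 0 and queue 1 */
		reta_conf[grp].reta[i % RTE_ETH_RETA_GROUP_SIZE] = i % 2;
	}
	ret = rte_eth_dev_rss_reta_update(port_id, reta_conf,
					  RTE_ETH_RSS_RETA_SIZE_128);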
/* Definitions used for VMDQ and DCB functionality */
-#define ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/* DCB capability defines */
-#define ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT 0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT 0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/* Definitions used for VLAN Offload functionality */
-#define ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
/* Definitions used for mask VLAN setting */
-#define ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
+#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
+#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
+#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
+#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/* Definitions used for receive MAC address */
-#define ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/* Definitions used for VMDQ pool rx mode setting */
-#define ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG 0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST 0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST 0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/** Maximum nb. of vlan per mirror rule */
-#define ETH_MIRROR_MAX_VLANS 64
+#define RTE_ETH_MIRROR_MAX_VLANS 64
+#define ETH_MIRROR_MAX_VLANS RTE_ETH_MIRROR_MAX_VLANS
-#define ETH_MIRROR_VIRTUAL_POOL_UP 0x01 /**< Virtual Pool uplink Mirroring. */
-#define ETH_MIRROR_UPLINK_PORT 0x02 /**< Uplink Port Mirroring. */
-#define ETH_MIRROR_DOWNLINK_PORT 0x04 /**< Downlink Port Mirroring. */
-#define ETH_MIRROR_VLAN 0x08 /**< VLAN Mirroring. */
-#define ETH_MIRROR_VIRTUAL_POOL_DOWN 0x10 /**< Virtual Pool downlink Mirroring. */
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP 0x01 /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT 0x02 /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT 0x04 /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN 0x08 /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN 0x10 /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
/**
* A structure used to configure VLAN traffic mirror of an Ethernet port.
@@ -865,20 +1016,26 @@ struct rte_eth_rss_reta_entry64 {
* in DCB configurations
*/
enum rte_eth_nb_tcs {
- ETH_4_TCS = 4, /**< 4 TCs with DCB. */
- ETH_8_TCS = 8 /**< 8 TCs with DCB. */
+ RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+ RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
* in VMDQ configurations.
*/
enum rte_eth_nb_pools {
- ETH_8_POOLS = 8, /**< 8 VMDq pools. */
- ETH_16_POOLS = 16, /**< 16 VMDq pools. */
- ETH_32_POOLS = 32, /**< 32 VMDq pools. */
- ETH_64_POOLS = 64 /**< 64 VMDq pools. */
+ RTE_ETH_8_POOLS = 8, /**< 8 VMDq pools. */
+ RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */
+ RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
+ RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
+#define ETH_8_POOLS RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_ETH_64_POOLS
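Illustrative only: the renamed pool counts slot into the VMDq Rx configuration as before. The RTE_ETH_MQ_RX_VMDQ_ONLY name here assumes the mq_mode enumerators follow the same renaming elsewhere in this series:

	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
	port_conf.rx_adv_conf.vmdq_rx_conf.nb_queue_pools = RTE_ETH_8_POOLS;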
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
@@ -964,7 +1121,7 @@ struct rte_eth_vmdq_rx_conf {
struct rte_eth_txmode {
enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
/**
- * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
@@ -1048,7 +1205,7 @@ struct rte_eth_rxconf {
uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
/**
- * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
* Only offloads set on rx_queue_offload_capa or rx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1077,7 +1234,7 @@ struct rte_eth_txconf {
uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
/**
- * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+ * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
* Only offloads set on tx_queue_offload_capa or tx_offload_capa
* fields on rte_eth_dev_info structure are allowed to be set.
*/
@@ -1188,12 +1345,17 @@ struct rte_eth_desc_lim {
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
- RTE_FC_NONE = 0, /**< Disable flow control. */
- RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
- RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
- RTE_FC_FULL /**< Enable flow control on both side. */
+ RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+ RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+ RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+ RTE_ETH_FC_FULL /**< Enable flow control on both side. */
};
+#define RTE_FC_NONE RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_ETH_FC_FULL
+
/**
* A structure used to configure Ethernet flow control parameter.
* These parameters will be configured into the register of the NIC.
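For example (a sketch with error handling trimmed), switching a port to full flow control with the renamed enumerators:

	struct rte_eth_fc_conf fc_conf;

	memset(&fc_conf, 0, sizeof(fc_conf));
	if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) == 0) {
		fc_conf.mode = RTE_ETH_FC_FULL;
		ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}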
@@ -1224,18 +1386,29 @@ struct rte_eth_pfc_conf {
* @see rte_eth_udp_tunnel
*/
enum rte_eth_tunnel_type {
- RTE_TUNNEL_TYPE_NONE = 0,
- RTE_TUNNEL_TYPE_VXLAN,
- RTE_TUNNEL_TYPE_GENEVE,
- RTE_TUNNEL_TYPE_TEREDO,
- RTE_TUNNEL_TYPE_NVGRE,
- RTE_TUNNEL_TYPE_IP_IN_GRE,
- RTE_L2_TUNNEL_TYPE_E_TAG,
- RTE_TUNNEL_TYPE_VXLAN_GPE,
- RTE_TUNNEL_TYPE_ECPRI,
- RTE_TUNNEL_TYPE_MAX,
+ RTE_ETH_TUNNEL_TYPE_NONE = 0,
+ RTE_ETH_TUNNEL_TYPE_VXLAN,
+ RTE_ETH_TUNNEL_TYPE_GENEVE,
+ RTE_ETH_TUNNEL_TYPE_TEREDO,
+ RTE_ETH_TUNNEL_TYPE_NVGRE,
+ RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+ RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+ RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+ RTE_ETH_TUNNEL_TYPE_ECPRI,
+ RTE_ETH_TUNNEL_TYPE_MAX,
};
+#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+
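A sketch of typical use with the renamed tunnel type (the IANA-assigned VXLAN port 4789 is shown; port_id is assumed started):

	struct rte_eth_udp_tunnel tunnel_udp = {
		.udp_port = 4789,			/* IANA VXLAN port */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	ret = rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel_udp);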
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1243,11 +1416,16 @@ enum rte_eth_tunnel_type {
* Memory space that can be configured to store Flow Director filters
* in the board memory.
*/
-enum rte_fdir_pballoc_type {
- RTE_FDIR_PBALLOC_64K = 0, /**< 64k. */
- RTE_FDIR_PBALLOC_128K, /**< 128k. */
- RTE_FDIR_PBALLOC_256K, /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+ RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */
+ RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */
+ RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */
};
+#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in RX descriptors.
@@ -1264,9 +1442,9 @@ enum rte_fdir_status_mode {
*
* If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
*/
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
enum rte_fdir_mode mode; /**< Flow Director mode. */
- enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+ enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
enum rte_fdir_status_mode status; /**< How to report FDIR hash. */
/** RX queue of packets matching a "drop" filter in perfect mode. */
uint8_t drop_queue;
@@ -1275,6 +1453,8 @@ struct rte_fdir_conf {
/**< Flex payload configuration. */
};
+#define rte_fdir_conf rte_eth_fdir_conf
+
/**
* UDP tunneling configuration.
*
@@ -1292,7 +1472,7 @@ struct rte_eth_udp_tunnel {
/**
* A structure used to enable/disable specific device interrupts.
*/
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint32_t lsc:1;
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1301,6 +1481,8 @@ struct rte_intr_conf {
uint32_t rmv:1;
};
+#define rte_intr_conf rte_eth_intr_conf
+
/**
* A structure used to configure an Ethernet port.
* Depending upon the RX multi-queue mode, extra advanced
@@ -1348,39 +1530,60 @@ struct rte_eth_conf {
/**
* RX offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP 0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
-#define DEV_RX_OFFLOAD_SCATTER 0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP 0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER 0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_JUMBO_FRAME 0x00000800
+#define DEV_RX_OFFLOAD_JUMBO_FRAME RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+#define RTE_ETH_RX_OFFLOAD_SCATTER 0x00002000
+#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
-#define DEV_RX_OFFLOAD_SECURITY 0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC 0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM 0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY 0x00008000
+#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC 0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH 0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
- DEV_RX_OFFLOAD_UDP_CKSUM | \
- DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_VLAN_EXTEND | \
- DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
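To illustrate (sketch only), an application masks the Rx offloads it wants against the advertised capabilities before rte_eth_dev_configure():

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	uint64_t wanted = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			  RTE_ETH_RX_OFFLOAD_VLAN_STRIP;

	memset(&port_conf, 0, sizeof(port_conf));
	rte_eth_dev_info_get(port_id, &dev_info);
	port_conf.rxmode.offloads = wanted & dev_info.rx_offload_capa;
	ret = rte_eth_dev_configure(port_id, 1 /* rxq */, 1 /* txq */,
				    &port_conf);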
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1390,52 +1593,74 @@ struct rte_eth_conf {
/**
* TX offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT 0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 0x00000200 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 0x00000400 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 0x00000800 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 0x00001000 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* tx queue without SW lock.
*/
-#define DEV_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS 0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
/**< Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs come from
* the same mempool and have refcnt = 1.
*/
-#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
-#define DEV_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
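For instance (editor's sketch), MT_LOCKFREE is a capability to probe at init time before deciding on the Tx queue/thread layout:

	struct rte_eth_dev_info dev_info;
	bool mt_lockfree;

	rte_eth_dev_info_get(port_id, &dev_info);
	/* true: one Tx queue may be shared by many lcores without a lock;
	 * false: give each lcore its own Tx queue instead.
	 */
	mt_lockfree = (dev_info.tx_offload_capa &
		       RTE_ETH_TX_OFFLOAD_MT_LOCKFREE) != 0;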
@@ -1672,8 +1897,10 @@ struct rte_eth_xstat_name {
char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
};
-#define ETH_DCB_NUM_TCS 8
-#define ETH_MAX_VMDQ_POOL 64
+#define RTE_ETH_DCB_NUM_TCS 8
+#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL 64
+#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
@@ -1749,13 +1976,17 @@ struct rte_eth_fec_capa {
*/
/**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK 0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK RTE_ETH_L2_TUNNEL_ENABLE_MASK
/**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK 0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK RTE_ETH_L2_TUNNEL_INSERTION_MASK
/**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK 0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK RTE_ETH_L2_TUNNEL_STRIPPING_MASK
/**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK 0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK RTE_ETH_L2_TUNNEL_FORWARDING_MASK
/**
* Function type used for RX packet processing packet callbacks.
@@ -2075,7 +2306,7 @@ uint16_t rte_eth_dev_count_total(void);
uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
/**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2085,7 +2316,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
const char *rte_eth_dev_rx_offload_name(uint64_t offload);
/**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
*
* @param offload
* Offload flag.
@@ -2179,7 +2410,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
* of the Prefetch, Host, and Write-Back threshold registers of the receive
* ring.
* In addition it contains the hardware offloads features to activate using
- * the DEV_RX_OFFLOAD_* flags.
+ * the RTE_ETH_RX_OFFLOAD_* flags.
* If an offloading set in rx_conf->offloads
* hasn't been set in the input argument eth_conf->rxmode.offloads
* to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -5231,7 +5462,7 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
* rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers
* of those packets whose transmission was effectively completed.
*
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
* invoke this function concurrently on the same tx queue without SW lock.
* @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
*
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc2e..8e6156a62aa9 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -154,7 +154,7 @@ struct rte_eth_dev_data {
/**< Device Ethernet link address.
* @see rte_eth_dev_release_port()
*/
- uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+ uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
/**< Bitmap associating MAC addresses to pools. */
struct rte_ether_addr *hash_mac_addrs;
/**< Device Ethernet MAC addresses of hash filtering.
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d60..4152067368b8 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2593,7 +2593,7 @@ struct rte_flow_action_rss {
* through.
*/
uint32_t level;
- uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
uint32_t key_len; /**< Hash key length in bytes. */
uint32_t queue_num; /**< Number of entries in @p queue. */
const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
#include "gso_udp4.h"
#define ILLEGAL_UDP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+ ((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
(ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
#define ILLEGAL_TCP_GSO_CTX(ctx) \
- ((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
- DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
- DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+ ((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
ol_flags = pkt->ol_flags;
if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_tunnel_udp4_segment(pkt, gso_size,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_TCP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
pkt->ol_flags &= (~PKT_TX_TCP_SEG);
ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
direct_pool, indirect_pool,
pkts_out, nb_pkts_out);
} else if (IS_IPV4_UDP(pkt->ol_flags) &&
- (gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+ (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
pkt->ol_flags &= (~PKT_TX_UDP_SEG);
ret = gso_udp4_segment(pkt, gso_size, direct_pool,
indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
uint32_t gso_types;
/**< the bit mask of required GSO types. The GSO library
* uses the same macros as that of describing device TX
- * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+ * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
* gso_types.
*
* For example, if applications want to segment TCP/IPv4
- * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+ * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
*/
uint16_t gso_size;
/**< maximum size of an output GSO segment, including packet
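Following that comment, a GSO context for plain TCP/IPv4 segmentation might be filled as below (a sketch: direct_mp/indirect_mp are hypothetical mempools created by the application, and pkts_out is an application-sized output array):

	struct rte_gso_ctx gso_ctx = {
		.direct_pool = direct_mp,
		.indirect_pool = indirect_mp,
		.flag = 0,
		.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO,
		.gso_size = 1400,	/* bytes per output segment */
	};

	/* pkt must carry PKT_TX_TCP_SEG and valid l2/l3/l4 lengths. */
	ret = rte_gso_segment(pkt, &gso_ctx, pkts_out, RTE_DIM(pkts_out));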
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f58102..50e611e887bf 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -192,7 +192,7 @@ extern "C" {
* The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
* HW capability, At minimum, the PMD should support
* PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
*/
#define PKT_RX_OUTER_L4_CKSUM_MASK ((1ULL << 21) | (1ULL << 22))
@@ -215,7 +215,7 @@ extern "C" {
* a) Fill outer_l2_len and outer_l3_len in mbuf.
* b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
* c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
*/
#define PKT_TX_OUTER_UDP_CKSUM (1ULL << 41)
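A sketch of those steps on the Tx side (m is a hypothetical tunnel-encapsulated mbuf with an IPv4 outer header; the port must have been configured with RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM):

	m->outer_l2_len = sizeof(struct rte_ether_hdr);
	m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= PKT_TX_OUTER_UDP_CKSUM | PKT_TX_OUTER_IPV4;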
@@ -258,7 +258,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
* or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
@@ -271,7 +271,7 @@ extern "C" {
* It can be used for tunnels which are not standards or listed above.
* It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
* if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
* Outer and inner checksums are done according to the existing flags like
* PKT_TX_xxx_CKSUM.
* Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 13f06d8ed25b..be43f8c328e1 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
* of the dynamic field to be registered:
* const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
* - The application initializes the PMD, and asks for this feature
- * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
* rxconf. This will make the PMD to register the field by calling
* rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
* stores the returned offset.
--
2.31.1
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
2021-08-26 11:04 4% ` Bruce Richardson
@ 2021-08-26 15:44 4% ` Andrew Rybchenko
2021-08-31 15:48 4% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-26 15:44 UTC (permalink / raw)
To: Bruce Richardson, Akhil Goyal
Cc: Ferruh Yigit, Ray Kinsella, dev, Konstantin Ananyev,
Jerin Jacob Kollanukkaran
On 8/26/21 2:04 PM, Bruce Richardson wrote:
> On Thu, Aug 26, 2021 at 10:46:35AM +0000, Akhil Goyal wrote:
>>> Target is to reduce the public interface surface to improve the ABI
>>> stability and this is preparation for the longer term stable ABI
>>> support.
>>>
>>> Mainly device abstraction layer libraries are impacted because they have
>>> two interfaces: one is the public interface to applications and the other is
>>> the internal interface to drivers. Some driver/internal interface
>>> structures/symbols are in the public interface by mistake; this work is
>>> to clean them up.
>>> Also, some libraries have 'static inline' functions for performance
>>> reasons (like the ones in ethdev); this work plans to split the
>>> structures and hide the part that is not used by inline functions.
>>>
>>> The need for this work on a stable ABI was already discussed and planned by
>>> the DPDK technical board:
>>> https://mails.dpdk.org/archives/dev/2021-July/214662.html
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> ---
>> Acked-by: Akhil Goyal <gakhil@marvell.com>
>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-08-26 14:57 4% [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-08-26 14:57 1% ` Harman Kalra
2021-09-03 12:40 4% ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
1 sibling, 0 replies; 200+ results
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Change the interrupt framework to use the interrupt handle get/set
APIs for every field. Direct access to any of the fields should be
avoided to prevent ABI breakage in the future.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 93 ++++++----
lib/eal/linux/eal_interrupts.c | 294 +++++++++++++++++++------------
2 files changed, 241 insertions(+), 146 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..5724948d81 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_handle_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,10 +137,20 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- }
+ src->intr_handle =
+ rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
/* we had no interrupts for this */
@@ -151,7 +163,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +186,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_handle_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +227,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +242,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +283,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +297,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +330,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +382,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +406,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +424,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +448,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +460,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +483,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +496,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +567,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +579,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +591,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..570eddf088 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_handle_max_intr_get(intr_handle) ?
+ (rte_intr_handle_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_handle_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_handle_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_handle_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_handle_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,22 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +590,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +600,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +641,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +651,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +683,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +715,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +773,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +796,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +840,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -806,22 +850,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +908,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +941,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +954,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read for different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1018,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1058,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1068,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1171,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_handle_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_handle_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1235,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1248,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_handle_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1469,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_handle_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_handle_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1478,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1492,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_handle_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1504,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1529,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_handle_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1551,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1561,34 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_handle_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(intr_handle,
+ NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
- sizeof(union rte_intr_read_buffer)) {
+ if ((uint64_t)rte_intr_handle_efd_counter_size_get(intr_handle)
+ > sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_handle_efds_index_set(intr_handle, 0,
+ rte_intr_handle_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_handle_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_handle_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1600,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_handle_max_intr_get(intr_handle) >
+ rte_intr_handle_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); i++)
+ close(rte_intr_handle_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
+ rte_intr_handle_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_handle_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1622,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_handle_max_intr_get(intr_handle) -
+ rte_intr_handle_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
@ 2021-08-26 14:57 4% Harman Kalra
2021-08-26 14:57 1% ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-09-03 12:40 4% ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
0 siblings, 2 replies; 200+ results
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev; +Cc: Harman Kalra
Moving struct rte_intr_handle to an internal structure avoids any ABI
breakage in the future, since this structure defines some static arrays
and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
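To make the problem concrete, here is a simplified sketch of what is
exposed to applications today (fields trimmed and comments added; see
rte_eal_interrupts.h for the full layout):

    /* The full layout is application-visible, so changing
     * RTE_MAX_RXTX_INTR_VEC_ID changes sizeof(struct rte_intr_handle)
     * and therefore breaks the ABI.
     */
    struct rte_intr_handle {
        int fd;                             /* interrupt event fd */
        enum rte_intr_handle_type type;     /* handle type */
        uint32_t max_intr;                  /* max interrupts requested */
        uint32_t nb_efd;                    /* number of available efds */
        int efds[RTE_MAX_RXTX_INTR_VEC_ID];                      /* static */
        struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID]; /* static */
    };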
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced, where all these APIs are
defined and the struct rte_intr_handle definition is hidden.
Details on each patch of the series:
Patch 1: eal: interrupt handle API prototypes
This patch provides prototypes of all the new get/set APIs and
also rearranges the headers related to the interrupt framework.
Epoll-related definitions and prototypes are moved into a new
header, rte_epoll.h, and the driver-specific APIs defined in
rte_eal_interrupts.h are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the
series rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: implement get set APIs
Implementing all get, set and alloc APIs. Alloc APIs are implemented
to allocate memory for an interrupt handle instance. Currently most of
the drivers define the interrupt handle instance as static, but now it
can't be static as the size of rte_intr_handle is unknown to the drivers.
Drivers are expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase (see the usage sketch
after the patch summaries).
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as required and to avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use get/set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently static) are dynamically allocated with a
default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
reallocated as per device requirement using the new API
rte_intr_handle_event_list_update().
Eg, on PCI device probing the MSI-X size can be queried and these
arrays can be reallocated accordingly.
Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the
alarm interrupt instance can be freed in alarm fini.
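As a usage sketch of the expected driver-side flow with the new APIs
(names as used in this series; the fd/type setters are assumed to exist
symmetric to the getters shown in patch 3; error handling trimmed):

    /* probe/init: allocate an instance instead of a static handle */
    struct rte_intr_handle *ih =
        rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
                                       false);
    if (ih == NULL)
        return -ENOMEM;
    rte_intr_handle_fd_set(ih, dev_fd);                      /* assumed setter */
    rte_intr_handle_type_set(ih, RTE_INTR_HANDLE_VFIO_MSIX); /* assumed setter */
    rte_intr_callback_register(ih, dev_interrupt_handler, dev);

    /* optionally grow efds/elist beyond the default size (patch 6) */
    rte_intr_handle_event_list_update(ih, msix_vec_count);

    /* cleanup: unregister the callback and free the instance */
    rte_intr_callback_unregister(ih, dev_interrupt_handler, dev);
    rte_intr_handle_instance_free(ih);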
Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd power functionality with octeontx2 and Intel i40e cards,
where interrupts are expected on packet arrival.
Harman Kalra (7):
eal: interrupt handle API prototypes
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle fields
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 237 +++---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 11 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 17 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 16 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 ++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 7 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 106 +--
drivers/common/cnxk/roc_nix_irq.c | 37 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 34 +
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 +--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 22 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 32 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 24 +-
drivers/net/e1000/igb_ethdev.c | 84 ++-
drivers/net/ena/ena_ethdev.c | 36 +-
drivers/net/enic/enic_main.c | 27 +-
drivers/net/failsafe/failsafe.c | 24 +-
drivers/net/failsafe/failsafe_intr.c | 45 +-
drivers/net/failsafe/failsafe_ops.c | 23 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 50 +-
drivers/net/hns3/hns3_ethdev_vf.c | 57 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 55 +-
drivers/net/i40e/i40e_ethdev_vf.c | 43 +-
drivers/net/iavf/iavf_ethdev.c | 41 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 23 +-
drivers/net/ice/ice_ethdev.c | 51 +-
drivers/net/igc/igc_ethdev.c | 47 +-
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 70 +-
drivers/net/memif/memif_socket.c | 99 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 63 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 20 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 48 +-
drivers/net/mlx5/linux/mlx5_os.c | 56 +-
drivers/net/mlx5/linux/mlx5_socket.c | 26 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 27 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_net.c | 42 +-
drivers/net/ngbe/ngbe_ethdev.c | 31 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 29 +-
drivers/net/tap/rte_eth_tap.c | 37 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 36 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +-
drivers/net/vhost/rte_eth_vhost.c | 78 +-
drivers/net/virtio/virtio_ethdev.c | 17 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 53 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 +-
drivers/raw/ifpga/ifpga_rawdev.c | 42 +-
drivers/raw/ntb/ntb.c | 10 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 11 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 668 +++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 2 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 57 +-
lib/eal/freebsd/eal_interrupts.c | 93 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 -------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 117 +++
lib/eal/include/rte_interrupts.h | 673 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 39 +-
lib/eal/linux/eal_dev.c | 65 +-
lib/eal/linux/eal_interrupts.c | 294 +++++---
lib/eal/version.map | 30 +
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
129 files changed, 3763 insertions(+), 1672 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
2021-08-20 16:28 3% [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
2021-08-20 16:28 2% ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-08-26 12:37 3% ` Jerin Jacob
2021-09-06 18:09 0% ` Ferruh Yigit
1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-26 12:37 UTC (permalink / raw)
To: Konstantin Ananyev
Cc: dpdk-dev, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
Qiming Yang, Qi Zhang, Beilei Xing, techboard
On Fri, Aug 20, 2021 at 9:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> NOTE: This is just an RFC to start further discussion and collect the feedback.
> Due to the significant amount of work, the required changes are applied only to two
> PMDs so far: net/i40e and net/ice.
> So to build it you'll need to add:
> -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> to your config options.
>
> That approach was selected to avoid(/minimize) possible performance losses.
>
> So far I have done only a limited amount of functional and performance testing.
> Didn't spot any functional problems, and the performance numbers
> remain the same before and after the patch on my box (testpmd, macswap fwd).
Based on testing on octeontx2, we see some regression in testpmd and
a bit on l3fwd too.
Without patch: 73.5 mpps/core in testpmd iofwd
With patch: 72.5 mpps/core in testpmd iofwd
Based on my understanding, it is due to the additional indirection.
My suggestion to fix the problem:
Remove the additional `data` redirection, pull the callback function
pointers back, and keep the rest as opaque as done in the existing
patch, like [1].
I don't believe this has any real implication on future ABI stability,
as we will not be adding any new item in rte_eth_fp in any way; new
features can be added in the slow-path rte_eth_dev, as mentioned in
the patch.
[2] is a patch doing the same; I don't see any performance
regression with [2].
[1]
- struct rte_eth_burst_api {
- struct rte_eth_fp {
+ void *data;
rte_eth_rx_burst_t rx_pkt_burst;
/**< PMD receive function. */
rte_eth_tx_burst_t tx_pkt_burst;
@@ -85,8 +100,19 @@ struct rte_eth_burst_api {
/**< Check the status of a Rx descriptor. */
rte_eth_tx_descriptor_status_t tx_descriptor_status;
/**< Check the status of a Tx descriptor. */
+ /**
+ * User-supplied functions called from rx_burst to post-process
+ * received packets before passing them to the user
+ */
+ struct rte_eth_rxtx_callback
+ *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ /**
+ * User-supplied functions called from tx_burst to pre-process
+ * received packets before passing them to the driver for transmission.
+ */
+ struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
uintptr_t reserved[2];
-} __rte_cache_min_aligned;
+} __rte_cache_aligned;
[2]
https://pastebin.com/CuqkrCW4
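To spell out the shape of that suggestion (a hand-written sketch, not
the code in [2]; the array name and the layout behind 'data' are
illustrative assumptions):

    static inline uint16_t
    rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
                     struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
    {
        /* one flat, cache-aligned slot per port: the burst callback
         * and the queue data are fetched together, with no second
         * structure to chase behind another pointer
         */
        struct rte_eth_fp *fp = &rte_eth_fp_array[port_id]; /* assumed name */
        void *rxq = ((void **)fp->data)[queue_id];          /* assumed layout */

        return fp->rx_pkt_burst(rxq, rx_pkts, nb_pkts);
    }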
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
@ 2021-08-26 11:58 4% ` Jerin Jacob
2021-08-28 14:16 0% ` Xueming(Steven) Li
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-26 11:58 UTC (permalink / raw)
To: Xueming(Steven) Li
Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko
On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, August 19, 2021 1:27 PM
> > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> >
> > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > > <andrew.rybchenko@oktetlabs.ru>
> > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > >
> > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > >
> > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > >
> > > > > > > In current DPDK framework, each RX queue is pre-loaded with
> > > > > > > mbufs for incoming packets. When number of representors scale
> > > > > > > out in a switch domain, the memory consumption became
> > > > > > > significant. Most important, polling all ports leads to high
> > > > > > > cache miss, high latency and low throughput.
> > > > > > >
> > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > Polling any queue using same shared RX queue receives packets
> > > > > > > from all member ports. Source port is identified by mbuf->port.
> > > > > > >
> > > > > > > Port queue number in a shared group should be identical. Queue
> > > > > > > index is
> > > > > > > 1:1 mapped in shared group.
> > > > > > >
> > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > >
> > > > > > > Multiple groups is supported by group ID.
> > > > > > >
> > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > ---
> > > > > > > Rx queue object could be used as shared Rx queue object, it's
> > > > > > > important to clear all queue control callback api that using queue object:
> > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > >
> > > > > > > #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > */
> > > > > > > + uint32_t shared_group; /**< Shared port group index in
> > > > > > > + switch domain. */
> > > > > >
> > > > > > Not able to see anyone setting/creating this group ID in the test application.
> > > > > > How this group is created?
> > > > >
> > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > All ports that supports shared-rxq assigned in same group.
> > > > >
> > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > to support group other than default.
> > > > >
> > > > > To support more groups simultaneously, need to consider testpmd
> > > > > forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > It's possible to specify how many ports to increase group number,
> > > > > but user must schedule stream affinity carefully - error prone.
> > > > >
> > > > > On the other hand, one group should be sufficient for most
> > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > >
> > > > Ack. One group is enough in testpmd.
> > > >
> > > > My question was more about who and how this group is created, Should
> > > > n't we need API to create shared_group? If we do the following, at least, I can think, how it can be implemented in SW or other HW.
> > > >
> > > > - Create aggregation queue group
> > > > - Attach multiple Rx queues to the aggregation queue group
> > > > - Pull the packets from the queue group(which internally fetch from
> > > > the Rx queues _attached_)
> > > >
> > > > Does the above kind of sequence, break your representor use case?
> > >
> > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> >
> > Which rte_flow pattern/action for this?
>
> No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
See below.
>
> >
> > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > the shared rxq group - this could be an helper API.
> > >
> > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> >
> > Are you doing this feature based on any HW support or it just pure SW thing, If it is SW, It is better to have just new vdev for like
> > drivers/net/bonding/. This we can help aggregate multiple Rxq across the multiple ports of same the driver.
>
> Based on HW support.
In Marvell HW, we do have some support; I will outline it here along with some queries on this.
# We need to create some new HW structure for aggregation
# Connect each Rxq to the new HW structure for aggregation
# Use rx_burst from the new HW structure.
Could you outline your HW support?
Also, I am not able to understand how this will reduce the memory; at
least in our HW we need to create more memory now to deal with this,
as we need to deal with a new HW structure.
How does it reduce the memory in your HW? Also, if memory is the
constraint, why NOT reduce the number of queues?
# Also, I was thinking, one way to avoid the fast path or ABI change would be like:
# Driver initializes one more eth_dev_ops in the driver as an aggregator ethdev
# devargs of the new ethdev or a specific API like
drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue)
tuples which need to be aggregated by the new ethdev port
# No change in fastpath or ABI is required in this model.
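A rough sketch of that model (every name below is made up for
illustration, by analogy with rte_eth_bond.h; nothing like this exists
today):

    /* control path: create the aggregator port, attach member queues */
    int agg_port = rte_eth_aggr_create("net_aggr0", rte_socket_id());
    rte_eth_aggr_queue_add(agg_port, member_port0, 0 /* queue */);
    rte_eth_aggr_queue_add(agg_port, member_port1, 0 /* queue */);

    /* fast path: the standard ethdev API is used unmodified */
    uint16_t nb = rte_eth_rx_burst(agg_port, 0, pkts, BURST_SIZE);
    /* mbuf->port still identifies the real source port */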
> Most users might use the PF in a group as the anchor port to rx burst; the current definition should be easy for them to migrate,
> but some users might prefer grouping some hot plugged/unplugged representors. EAL could provide wrappers, or users could do
> that themselves if the strategy is not complex enough. Anyway, welcome any suggestion.
>
> >
> >
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > > /**
> > > > > > > * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > * Only offloads set on rx_queue_offload_capa or
> > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf {
> > > > > > > #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000
> > > > > > > #define DEV_RX_OFFLOAD_RSS_HASH 0x00080000
> > > > > > > #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > +/**
> > > > > > > + * Rx queue is shared among ports in same switch domain to
> > > > > > > +save memory,
> > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > + */
> > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ 0x00200000
> > > > > > >
> > > > > > > #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > DEV_RX_OFFLOAD_UDP_CKSUM | \
> > > > > > > --
> > > > > > > 2.25.1
> > > > > > >
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
2021-08-26 10:46 4% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-08-26 10:47 4% ` Jerin Jacob
@ 2021-08-26 11:04 4% ` Bruce Richardson
2021-08-26 15:44 4% ` Andrew Rybchenko
1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2021-08-26 11:04 UTC (permalink / raw)
To: Akhil Goyal
Cc: Ferruh Yigit, Ray Kinsella, dev, Konstantin Ananyev,
Jerin Jacob Kollanukkaran
On Thu, Aug 26, 2021 at 10:46:35AM +0000, Akhil Goyal wrote:
> > Target is to reduce the public interface surface to improve the ABI
> > stability and this is preparation for the longer term stable ABI
> > support.
> >
> > Mainly device abstraction layer libraries are impacted because they have
> > two interfaces: one is the public interface to the applications and the other is
> > the internal interface to the drivers. Some driver/internal interface
> > structures/symbols are in the public interface by mistake; this work is
> > to clean them up.
> > Also some libraries have 'static inline' functions for performance
> > reasons (like the ones in ethdev); this work plans to split the
> > structures and hide the part that is not used by inline functions.
> >
> > The need for this work on the stable ABI was already discussed and planned by
> > the DPDK technical board:
> > https://mails.dpdk.org/archives/dev/2021-July/214662.html
> >
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > ---
> Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
2021-08-26 10:46 4% ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-08-26 10:47 4% ` Jerin Jacob
2021-08-26 11:04 4% ` Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-08-26 10:47 UTC (permalink / raw)
To: Akhil Goyal
Cc: Ferruh Yigit, Ray Kinsella, dev, Konstantin Ananyev,
Jerin Jacob Kollanukkaran
On Thu, Aug 26, 2021 at 4:16 PM Akhil Goyal <gakhil@marvell.com> wrote:
>
> > Target is to reduce the public interface surface to improve the ABI
> > stability and this is preparation for the longer term stable ABI
> > support.
> >
> > Mainly device abstraction layer libraries are impacted because they have
> > two interfaces: one is the public interface to the applications and the other is
> > the internal interface to the drivers. Some driver/internal interface
> > structures/symbols are in the public interface by mistake; this work is
> > to clean them up.
> > Also some libraries have 'static inline' functions for performance
> > reasons (like the ones in ethdev); this work plans to split the
> > structures and hide the part that is not used by inline functions.
> >
> > The need for this work on the stable ABI was already discussed and planned by
> > the DPDK technical board:
> > https://mails.dpdk.org/archives/dev/2021-July/214662.html
> >
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > ---
> Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
2021-08-26 10:35 15% [dpdk-dev] [PATCH] doc: announce library refactor for ABI improvement Ferruh Yigit
@ 2021-08-26 10:46 4% ` Akhil Goyal
2021-08-26 10:47 4% ` Jerin Jacob
2021-08-26 11:04 4% ` Bruce Richardson
0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-08-26 10:46 UTC (permalink / raw)
To: Ferruh Yigit, Ray Kinsella
Cc: dev, Konstantin Ananyev, Jerin Jacob Kollanukkaran
> Target is to reduce the public interface surface to improve the ABI
> stability and this is preparation for the longer term stable ABI
> support.
>
> Mainly device abstraction layer libraries are impacted because they have
> two interfaces: one is the public interface to the applications and the other is
> the internal interface to the drivers. Some driver/internal interface
> structures/symbols are in the public interface by mistake; this work is
> to clean them up.
> Also some libraries have 'static inline' functions for performance
> reasons (like the ones in ethdev); this work plans to split the
> structures and hide the part that is not used by inline functions.
>
> The need for this work on the stable ABI was already discussed and planned by
> the DPDK technical board:
> https://mails.dpdk.org/archives/dev/2021-July/214662.html
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
Acked-by: Akhil Goyal <gakhil@marvell.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH] doc: announce library refactor for ABI improvement
@ 2021-08-26 10:35 15% Ferruh Yigit
2021-08-26 10:46 4% ` [dpdk-dev] [EXT] " Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-26 10:35 UTC (permalink / raw)
To: Ray Kinsella
Cc: Ferruh Yigit, dev, Konstantin Ananyev, Jerin Jacob Kollanukkaran,
Akhil Goyal
Target is to reduce the public interface surface to improve the ABI
stability and this is preparation for the longer term stable ABI
support.
Mainly device abstraction layer libraries are impacted because they have
two interfaces: one is the public interface to the applications and the other is
the internal interface to the drivers. Some driver/internal interface
structures/symbols are in the public interface by mistake; this work is
to clean them up.
Also some libraries have 'static inline' functions for performance
reasons (like the ones in ethdev); this work plans to split the
structures and hide the part that is not used by inline functions.
The need for this work on the stable ABI was already discussed and planned by
the DPDK technical board:
https://mails.dpdk.org/archives/dev/2021-July/214662.html
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Konstantin Ananyev <konstantin.ananyev@intel.com>
Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Cc: Akhil Goyal <gakhil@marvell.com>
CC: Ray Kinsella <mdr@ashroe.eu>
---
doc/guides/rel_notes/deprecation.rst | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b0b..1f02d9e14501 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,6 +63,11 @@ Deprecation Notices
us extending existing enum/define.
One solution can be using a fixed size array instead of ``.*MAX.*`` value.
+* lib: Will hide internal data structures and symbols from the public interfaces
+ as much as possible in v21.11.
+ This ABI break is done to improve the ABI stability in the long term and will
+ be done mainly, but not limited to, in device abstraction layer libraries.
+
* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
Macros will be added for backward compatibility.
Backward compatibility macros will be removed on v22.11.
--
2.31.1
^ permalink raw reply [relevance 15%]
* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
2021-08-26 10:09 3% ` Bruce Richardson
@ 2021-08-26 10:14 0% ` Burakov, Anatoly
2021-08-31 13:42 0% ` Ding, Xuan
0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-08-26 10:14 UTC (permalink / raw)
To: Bruce Richardson
Cc: Ferruh Yigit, Xuan Ding, dev, maxime.coquelin, chenbo.xia,
jiayu.hu, techboard, David Marchand
On 26-Aug-21 11:09 AM, Bruce Richardson wrote:
> On Thu, Aug 26, 2021 at 10:46:07AM +0100, Burakov, Anatoly wrote:
>> On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
>>> On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
>>>> On 25-Aug-21 12:27 PM, Xuan Ding wrote:
>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>> side effect of compacting two separate mappings that just happen to be
>>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>>> not allow partial unmapping of memory mapped for DMA, the current DPDK
>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>> maps even though it could have been unmapped [1].
>>>>>
>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>> also include page size, and always map memory page-by-page.
>>>>>
>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>
>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>> ---
>>>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>> 1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>> index 76a4abfd6b..272ffa993e 100644
>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
>>>>> information from the crypto/security operation. This field will be used to
>>>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>>>> +
>>>>> + * vfio: the functions `rte_vfio_container_dma_map` and
>>>>> `rte_vfio_container_dma_unmap`
>>>>> + will be amended to include page size. This change is targeted for DPDK 21.11.
>>>>>
>>>>
>>>> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>>>
>>>
>>> Techboard decision was to add a new API, instead of updating existing ones, to
>>> not break the apps using this API.
>>>
>>> @Xuan, @Anatoly, can you please confirm if this will solve your problem?
>>>
>>
>> I don't think adding a new API is a particularly good solution. The "new"
>> API will be almost exactly the same as the old one, but with one added parameter. I don't
>> expect code duplication to be an issue, but having two APIs that do the
>> same thing seems ripe for potential confusion.
>>
> Well, if one API is marked as deprecated, then there will be no confusion
> for users, since using the wrong one will give a warning pointing to the
> right one.
>
>> If we add a new API, we can then either remove the old API entirely in
>> 22.11 (effectively renaming it), or we remove the new API in 22.11 and
>> rename it back to the old function name. I don't think either of these
>> is a good solution, as we risk introducing more users of an API that
>> will later change.
> The new API will not be renamed to the old one, since that would break apps
> using it without proper deprecation process. Removing the old one alone
> would be the approach to be used, but it would be correctly following the
> deprecation process and giving users at least 1 year, if not 2, of notice
> about the change.
>
>>
>> I think the pain of updating current software for 21.11 (while keeping
>> compatibility with 20.11 ABI!) is going to happen regardless of whether we
>> decide to add a "temporary" new API or permanently rename the old one. It's
>> (in my opinion) easier to just bite the bullet and update the function in
>> 21.11.
> I fail to see the issue with adding a new function. Whether we add a new
> function or add a parameter to the existing one, code will have to change
> either way. The advantage of the former scheme, adding the new function, is
> that it shows that we are serious about our ABI/API compatibility process,
> and are not lax about passing exceptions when other options are available.
>
>>
>> However, if the tech board feels like adding a new API is a good solution,
>> then okay, but we need to flesh out roadmap a bit better. Do we rename the
>> old API, or do we add a temporary new API?
>
> New API added, old API deprecated. In future old API goes away leaving new
> API as the only option.
>
> /Bruce
>
Okay, so it's settled then. I revoke my ack for this patch, and we need
a new deprecation notice.
--
Thanks,
Anatoly
^ permalink raw reply [relevance 0%]
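(For context, a minimal sketch of the two shapes discussed in this thread, assuming the page-size parameter from the deprecation notice; rte_vfio_container_dma_map_paged is an illustrative name only, not an API that exists in DPDK.)

/* Existing API: maps one region, which VFIO may merge with an
 * adjacent mapping in its internal list. */
int rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len);

/* Hypothetical replacement per the techboard decision: the extra
 * pagesz argument lets EAL map the region page-by-page, so any
 * page-aligned subrange can later be unmapped independently. */
int rte_vfio_container_dma_map_paged(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len, uint64_t pagesz);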
* [dpdk-dev] [PATCH v8] doc: add release milestones definition
@ 2021-08-26 10:11 5% ` Thomas Monjalon
2021-09-02 16:33 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-08-26 10:11 UTC (permalink / raw)
To: dev
Cc: david.marchand, ferruh.yigit, Asaf Penso, John McNamara,
Ajit Khaparde, Bruce Richardson
From: Asaf Penso <asafp@nvidia.com>
Adding more information about the release milestones.
This includes the scope of change, expectations, etc.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
v2: fix styling format and add content in the commit message
v3: change punctuation and avoid plural form when unneeded
v4: avoid abbreviations, "Priority" in -rc, and reword as John suggests
v5: note that release candidates may vary
v6: merge RFC and proposal deadline, add roadmap link and reduce duplication
v7: make expectations clearer and stricter
v8: add tests, more fixes, maintainers approval and new API rules
---
doc/guides/contributing/patches.rst | 85 +++++++++++++++++++++++++++--
1 file changed, 80 insertions(+), 5 deletions(-)
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index b9cc6e67ae..ef784d0f99 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -164,6 +164,10 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
the :doc:`ABI policy <abi_policy>` and :ref:`ABI versioning <abi_versioning>`
guides. New external functions should also be added in alphabetical order.
+* Any new API function should be used in the ``/app`` test directory.
+
+* When introducing a new device API, at least one driver should implement it.
+
* Important changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
@@ -177,6 +181,8 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
* Add documentation, if relevant, in the form of Doxygen comments or a User Guide in RST format.
See the :ref:`Documentation Guidelines <doc_guidelines>`.
+* Code and related documentation must be updated atomically in the same patch.
+
Once the changes have been made you should commit them to your local repo.
For small changes, that do not require specific explanations, it is better to keep things together in the
@@ -185,11 +191,6 @@ Larger changes that require different explanations should be separated into logi
A good way of thinking about whether a patch should be split is to consider whether the change could be
applied without dependencies as a backport.
-It is better to keep the related documentation changes in the same patch
-file as the code, rather than one big documentation patch at the end of a
-patchset. This makes it easier for future maintenance and development of the
-code.
-
As a guide to how patches should be structured run ``git log`` on similar files.
@@ -663,3 +664,77 @@ patch accepted. The general cycle for patch review and acceptance is:
than rework of the original.
* Trivial patches may be merged sooner than described above at the tree committer's
discretion.
+
+
+Milestones definition
+---------------------
+
+Each DPDK release has milestones that help everyone to converge to the release date.
+The following is a list of these milestones together with
+concrete definitions and expectations for a typical release cycle.
+An average cycle lasts 3 months and has 4 release candidates in the last month.
+Test reports are expected to be received after each release candidate.
+The number and expectations of release candidates might vary slightly.
+The schedule is updated in the `roadmap <https://core.dpdk.org/roadmap/#dates>`_.
+
+.. note::
+ Sooner is always better. Deadlines are not ideal dates.
+
+ Integration is never guaranteed but everyone can help.
+
+Roadmap
+~~~~~~~
+
+* Announce new features in libraries, drivers, applications, and examples.
+* To be published before the first day of the release cycle.
+
+Proposal Deadline
+~~~~~~~~~~~~~~~~~
+
+* Must send an RFC (Request For Comments) or a complete patch of new features.
+* Early RFC gives time for design review before complete implementation.
+* Should include at least the API changes in libraries and applications.
+* Library code should be quite complete at the deadline.
+* Nice to have: driver implementation, example code, and documentation.
+
+rc1
+~~~
+
+* Priority: libraries. No library feature should be accepted after -rc1.
+* API changes or additions must be implemented in libraries.
+* The API must include Doxygen documentation
+ and be part of the relevant .rst files (library-specific and release notes).
+* API should be used in a test application (``/app``).
+* At least one PMD should implement the API.
+ It may be a draft sent in a separate series.
+* The above should be sent to the mailing list at least 2 weeks before -rc1
+ to give time for review and maintainers' approval.
+* If no review after 10 days, a reminder should be sent.
+* Nice to have: example code (``/examples``).
+
+rc2
+~~~
+
+* Priority: drivers. No driver feature should be accepted after -rc2.
+* A driver change must include documentation
+ in the relevant .rst files (driver-specific and release notes).
+* The above should be sent to the mailing list at least 2 weeks before -rc2.
+* Any issue found in -rc1 should be fixed.
+
+rc3
+~~~
+
+* Priority: applications. No application feature should be accepted after -rc3.
+* New functionality that does not depend on library updates
+ can be integrated as part of -rc3.
+* The application change must include documentation in the relevant .rst files
+ (application-specific and release notes if significant).
+* Library and driver cleanups are allowed.
+* Small driver reworks.
+* Critical and minor bug fixes.
+
+rc4
+~~~
+
+* Documentation updates.
+* Critical bug fixes.
--
2.31.1
^ permalink raw reply [relevance 5%]
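(To make the -rc1 documentation rule above concrete, a minimal sketch of the expected Doxygen style for a new function; the function and its parameters below are hypothetical.)

/**
 * Retrieve the widget count of a device.
 *
 * @param port_id
 *   The port identifier of the device.
 * @param count
 *   Output location for the widget count.
 * @return
 *   - (0) if successful.
 *   - (-ENODEV) if *port_id* is invalid.
 *   - (-EINVAL) if bad parameter.
 */
__rte_experimental
int rte_example_widget_count_get(uint16_t port_id, uint32_t *count);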
* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
2021-08-26 9:46 3% ` Burakov, Anatoly
@ 2021-08-26 10:09 3% ` Bruce Richardson
2021-08-26 10:14 0% ` Burakov, Anatoly
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-08-26 10:09 UTC (permalink / raw)
To: Burakov, Anatoly
Cc: Ferruh Yigit, Xuan Ding, dev, maxime.coquelin, chenbo.xia,
jiayu.hu, techboard, David Marchand
On Thu, Aug 26, 2021 at 10:46:07AM +0100, Burakov, Anatoly wrote:
> On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
> > On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
> > > On 25-Aug-21 12:27 PM, Xuan Ding wrote:
> > > > Currently, the VFIO subsystem will compact adjacent DMA regions for the
> > > > purposes of saving space in the internal list of mappings. This has a
> > > > side effect of compacting two separate mappings that just happen to be
> > > > adjacent in memory. Since the VFIO implementation on IA platforms also does
> > > > not allow partial unmapping of memory mapped for DMA, the current DPDK
> > > > VFIO implementation will prevent unmapping of accidentally adjacent
> > > > maps even though they could otherwise have been unmapped [1].
> > > >
> > > > The proper fix for this issue is to change the VFIO DMA mapping API to
> > > > also include page size, and always map memory page-by-page.
> > > >
> > > > [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
> > > >
> > > > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > > > ---
> > > > doc/guides/rel_notes/deprecation.rst | 3 +++
> > > > 1 file changed, 3 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > b/doc/guides/rel_notes/deprecation.rst
> > > > index 76a4abfd6b..272ffa993e 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -287,3 +287,6 @@ Deprecation Notices
> > > > reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
> > > > information from the crypto/security operation. This field will be used to
> > > > communicate events such as soft expiry with IPsec in lookaside mode.
> > > > +
> > > > + * vfio: the functions `rte_vfio_container_dma_map` and
> > > > `rte_vfio_container_dma_unmap`
> > > > + will be amended to include page size. This change is targeted for DPDK 21.11.
> > > >
> > >
> > > Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > >
> >
> > Techboard decision was to add a new API, instead of updating existing ones, to
> > not break the apps using this API.
> >
> > @Xuan, @Anatoly, can you please confirm if this will solve your problem?
> >
>
> I don't think adding a new API is a particularly good solution. The "new"
> API will be almost exactly the same as the old one, but with one extra
> parameter. I don't expect code duplication to be an issue, but having two
> APIs that do the same thing seems ripe for confusion.
>
Well, if one API is marked as deprecated, then there will be no confusion
for users, since using the wrong one will give a warning pointing to the
right one.
> If we add a new API, we can then either remove the old API entirely in
> 22.11 (effectively renaming it), or we remove the new API in 22.11 and
> rename it back to the old function name. I don't think either of these
> is a good solution, as we risk introducing more users of an API that
> will later change.
The new API will not be renamed to the old one, since that would break apps
using it without proper deprecation process. Removing the old one alone
would be the approach to be used, but it would be correctly following the
deprecation process and giving users at least 1 year, if not 2, of notice
about the change.
>
> I think the pain of updating current software for 21.11 (while keeping
> compatibility with 20.11 ABI!) is going to happen regardless of whether we
> decide to add a "temporary" new API or permanently rename the old one. It's
> (in my opinion) easier to just bite the bullet and update the function in
> 21.11.
I fail to see the issue with adding a new function. Whether we add a new
function or add a parameter to the existing one, code will have to change
either way. The advantage of the former scheme, adding the new function, is
that it shows that we are serious about our ABI/API compatibility process,
and are not lax about passing exceptions when other options are available.
>
> However, if the tech board feels like adding a new API is a good solution,
> then okay, but we need to flesh out roadmap a bit better. Do we rename the
> old API, or do we add a temporary new API?
New API added, old API deprecated. In future old API goes away leaving new
API as the only option.
/Bruce
^ permalink raw reply [relevance 3%]
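(A sketch of the mechanism Bruce describes: the old function survives for at least one release, marked with DPDK's __rte_deprecated attribute so every caller gets a compile-time warning steering them to the replacement. The marking below is illustrative only and was not part of any posted patch.)

/* Old entry point, kept for compatibility until its scheduled removal. */
__rte_deprecated
int rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len);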
* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
@ 2021-08-26 9:46 3% ` Burakov, Anatoly
2021-08-26 10:09 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-08-26 9:46 UTC (permalink / raw)
To: Ferruh Yigit, Xuan Ding, dev, maxime.coquelin, chenbo.xia
Cc: jiayu.hu, bruce.richardson, techboard, David Marchand
On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
> On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
>> On 25-Aug-21 12:27 PM, Xuan Ding wrote:
>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>> purposes of saving space in the internal list of mappings. This has a
>>> side effect of compacting two separate mappings that just happen to be
>>> adjacent in memory. Since the VFIO implementation on IA platforms also does
>>> not allow partial unmapping of memory mapped for DMA, the current DPDK
>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>> maps even though they could otherwise have been unmapped [1].
>>>
>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>> also include page size, and always map memory page-by-page.
>>>
>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>
>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>> ---
>>> doc/guides/rel_notes/deprecation.rst | 3 +++
>>> 1 file changed, 3 insertions(+)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>> b/doc/guides/rel_notes/deprecation.rst
>>> index 76a4abfd6b..272ffa993e 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>> reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
>>> information from the crypto/security operation. This field will be used to
>>> communicate events such as soft expiry with IPsec in lookaside mode.
>>> +
>>> + * vfio: the functions `rte_vfio_container_dma_map` and
>>> `rte_vfio_container_dma_unmap`
>>> + will be amended to include page size. This change is targeted for DPDK 21.11.
>>>
>>
>> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>
>
> Techboard decision was to add a new API, instead of updating existing ones, to
> not break the apps using this API.
>
> @Xuan, @Anatoly, can you please confirm if this will solve your problem?
>
I don't think adding a new API is a particularly good solution. The
"new" API will be almost exactly the same as the old one, but with one
extra parameter. I don't expect code duplication to be an issue, but
having two APIs that do the same thing seems ripe for confusion.
If we add a new API, we can then either remove the old API entirely in
22.11 (effectively renaming it), or we remove the new API in 22.11 and
rename it back to the old function name. I don't think either of these
is a good solution, as we risk introducing more users of an API that
will later change.
I think the pain of updating current software for 21.11 (while keeping
compatibility with 20.11 ABI!) is going to happen regardless of whether
we decide to add a "temporary" new API or permanently rename the old
one. It's (in my opinion) easier to just bite the bullet and update
the function in 21.11.
However, if the tech board feels like adding a new API is a good
solution, then okay, but we need to flesh out roadmap a bit better. Do
we rename the old API, or do we add a temporary new API?
--
Thanks,
Anatoly
^ permalink raw reply [relevance 3%]
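(A minimal sketch of the failure mode described in the commit message above, using the current API; error handling is trimmed and the sizes are illustrative.)

#include <rte_vfio.h>

static int
map_then_partially_unmap(int cfd, uint64_t va, uint64_t iova, uint64_t len)
{
	/* Two independent mappings that happen to be adjacent; VFIO
	 * compacts them into a single entry in its internal list. */
	if (rte_vfio_container_dma_map(cfd, va, iova, len) < 0)
		return -1;
	if (rte_vfio_container_dma_map(cfd, va + len, iova + len, len) < 0)
		return -1;

	/* On IA platforms this fails: the kernel refuses to unmap only
	 * part of the (accidentally) merged region. */
	return rte_vfio_container_dma_unmap(cfd, va, iova, len);
}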
* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add an API to get device configuration info
2021-08-25 20:07 3% ` Ferruh Yigit
@ 2021-08-26 6:00 0% ` Ajit Khaparde
0 siblings, 0 replies; 200+ results
From: Ajit Khaparde @ 2021-08-26 6:00 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jie Wang, dpdk-dev, Xiaoyun Li, Andrew Rybchenko, Thomas Monjalon
On Wed, Aug 25, 2021 at 1:08 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 8/24/2021 7:19 PM, Jie Wang wrote:
> > This patch adds a new API "rte_eth_dev_conf_info_get()" to help testpmd get
> > device configuration info.
> >
> > Signed-off-by: Jie Wang <jie1x.wang@intel.com>
> > ---
> > lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++++++++++++
> > lib/ethdev/rte_ethdev.h | 26 ++++++++++++++++++++++++++
> > lib/ethdev/version.map | 3 +++
> > 3 files changed, 56 insertions(+)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 9d95cd11e1..74184099a1 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -3458,6 +3458,33 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> > return 0;
> > }
> >
> > +int
> > +rte_eth_dev_conf_info_get(uint16_t port_id,
> > + struct rte_eth_dev_conf_info *dev_conf_info)
> > +{
> > + struct rte_eth_dev *dev;
> > +
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + dev = &rte_eth_devices[port_id];
> > +
> > + if (dev_conf_info == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u config info to NULL\n",
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + /*
> > + * Init dev_conf_info before port_id check since caller does not have
> > + * return status and does not know if get is successful or not.
> > + */
> > + memset(dev_conf_info, 0, sizeof(struct rte_eth_dev_conf_info));
> > +
> > + dev_conf_info->rx_offloads = dev->data->dev_conf.rxmode.offloads;
> > + dev_conf_info->tx_offloads = dev->data->dev_conf.txmode.offloads;
> > +
> > + return 0;
> > +}
> > +
> > int
> > rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
> > uint32_t *ptypes, int num)
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index d2b27c351f..70a2db550f 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1587,6 +1587,15 @@ struct rte_eth_dev_info {
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > };
> >
> > +/**
> > + * Ethernet device configuration information structure.
> > + * Used to retrieve information about configured device.
> > + */
> > +struct rte_eth_dev_conf_info {
> > + uint64_t rx_offloads; /**< rxmode offloads */
> > + uint64_t tx_offloads; /**< txmode offloads */
> > +};
>
> My concern is that if we need to extend this struct later, when an application
> wants to get more of the current config from the DPDK layer, it will cause an
> ABI break and will need to wait for the next LTS.
>
> And as this struct grows, it will become a kind of duplicate of 'struct
> rte_eth_conf'.
>
> What do you think about reusing 'struct rte_eth_conf' in this API, to cover future needs?
+1
rte_eth_conf gives all the information needed and more for future enhancements!
>
> > +
> > /**
> > * RX/TX queue states
> > */
> > @@ -3058,6 +3067,23 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
> > */
> > int rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info);
> >
> > +/**
> > + * Retrieve the contextual information of an Ethernet device.
> > + *
> > + * @param port_id
> > + * The port identifier of the Ethernet device.
> > + * @param dev_conf_info
> > + * A pointer to a structure of type *rte_eth_dev_conf_info* to be filled with
> > + * the contextual information of the Ethernet device.
> > + * @return
> > + * - (0) if successful.
> > + * - (-ENOTSUP) if support for dev_infos_get() does not exist for the device.
> > + * - (-ENODEV) if *port_id* invalid.
> > + * - (-EINVAL) if bad parameter.
> > + */
> > +int rte_eth_dev_conf_info_get(uint16_t port_id,
> > + struct rte_eth_dev_conf_info *dev_conf_info);
> > +
> > /**
> > * Retrieve the firmware version of a device.
> > *
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index 44d30b05ae..40539f99f9 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -249,6 +249,9 @@ EXPERIMENTAL {
> > rte_mtr_meter_policy_delete;
> > rte_mtr_meter_policy_update;
> > rte_mtr_meter_policy_validate;
> > +
> > + # added in 21.11
> > + rte_eth_dev_conf_info_get;
> > };
> >
> > INTERNAL {
> >
>
^ permalink raw reply [relevance 0%]
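(A sketch of the direction suggested above: instead of introducing a new and inevitably growing rte_eth_dev_conf_info, return a copy of the rte_eth_conf stored at configure time. The function name and exact shape below are illustrative, mirroring the validation pattern in the posted patch.)

int
rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
{
	struct rte_eth_dev *dev;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	dev = &rte_eth_devices[port_id];

	if (dev_conf == NULL)
		return -EINVAL;

	/* Hand back the configuration saved by rte_eth_dev_configure(). */
	memcpy(dev_conf, &dev->data->dev_conf, sizeof(struct rte_eth_conf));

	return 0;
}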
* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add an API to get device configuration info
@ 2021-08-25 20:07 3% ` Ferruh Yigit
2021-08-26 6:00 0% ` Ajit Khaparde
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-25 20:07 UTC (permalink / raw)
To: Jie Wang, dev; +Cc: xiaoyun.li, andrew.rybchenko, thomas
On 8/24/2021 7:19 PM, Jie Wang wrote:
> This patch adds a new API "rte_eth_dev_conf_info_get()" to help testpmd get
> device configuration info.
>
> Signed-off-by: Jie Wang <jie1x.wang@intel.com>
> ---
> lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 26 ++++++++++++++++++++++++++
> lib/ethdev/version.map | 3 +++
> 3 files changed, 56 insertions(+)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9d95cd11e1..74184099a1 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3458,6 +3458,33 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> return 0;
> }
>
> +int
> +rte_eth_dev_conf_info_get(uint16_t port_id,
> + struct rte_eth_dev_conf_info *dev_conf_info)
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (dev_conf_info == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u config info to NULL\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> + /*
> + * Init dev_conf_info before port_id check since caller does not have
> + * return status and does not know if get is successful or not.
> + */
> + memset(dev_conf_info, 0, sizeof(struct rte_eth_dev_conf_info));
> +
> + dev_conf_info->rx_offloads = dev->data->dev_conf.rxmode.offloads;
> + dev_conf_info->tx_offloads = dev->data->dev_conf.txmode.offloads;
> +
> + return 0;
> +}
> +
> int
> rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
> uint32_t *ptypes, int num)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d2b27c351f..70a2db550f 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1587,6 +1587,15 @@ struct rte_eth_dev_info {
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
> +/**
> + * Ethernet device configuration information structure.
> + * Used to retrieve information about configured device.
> + */
> +struct rte_eth_dev_conf_info {
> + uint64_t rx_offloads; /**< rxmode offloads */
> + uint64_t tx_offloads; /**< txmode offloads */
> +};
My concern is that if we need to extend this struct later, when an application
wants to get more of the current config from the DPDK layer, it will cause an
ABI break and will need to wait for the next LTS.
And as this struct grows, it will become a kind of duplicate of 'struct
rte_eth_conf'.
What do you think about reusing 'struct rte_eth_conf' in this API, to cover future needs?
> +
> /**
> * RX/TX queue states
> */
> @@ -3058,6 +3067,23 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
> */
> int rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info);
>
> +/**
> + * Retrieve the contextual information of an Ethernet device.
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + * @param dev_conf_info
> + * A pointer to a structure of type *rte_eth_dev_conf_info* to be filled with
> + * the contextual information of the Ethernet device.
> + * @return
> + * - (0) if successful.
> + * - (-ENOTSUP) if support for dev_infos_get() does not exist for the device.
> + * - (-ENODEV) if *port_id* invalid.
> + * - (-EINVAL) if bad parameter.
> + */
> +int rte_eth_dev_conf_info_get(uint16_t port_id,
> + struct rte_eth_dev_conf_info *dev_conf_info);
> +
> /**
> * Retrieve the firmware version of a device.
> *
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 44d30b05ae..40539f99f9 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -249,6 +249,9 @@ EXPERIMENTAL {
> rte_mtr_meter_policy_delete;
> rte_mtr_meter_policy_update;
> rte_mtr_meter_policy_validate;
> +
> + # added in 21.11
> + rte_eth_dev_conf_info_get;
> };
>
> INTERNAL {
>
^ permalink raw reply [relevance 3%]
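(For reference, a caller-side sketch of the API as proposed in this patch, roughly how testpmd might consume it; the specific offload check is illustrative.)

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rx_csum(uint16_t port_id)
{
	struct rte_eth_dev_conf_info conf_info;

	if (rte_eth_dev_conf_info_get(port_id, &conf_info) != 0)
		return;

	/* rx_offloads mirrors dev_conf.rxmode.offloads at configure time. */
	if (conf_info.rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
		printf("port %u: Rx checksum offload enabled\n", port_id);
}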
* [dpdk-dev] [PATCH v9] eal: remove sys/queue.h from public headers
2021-08-23 13:03 1% ` [dpdk-dev] [PATCH v8] " William Tu
@ 2021-08-24 16:21 1% ` William Tu
0 siblings, 0 replies; 200+ results
From: William Tu @ 2021-08-24 16:21 UTC (permalink / raw)
To: dev; +Cc: Dmitry.Kozliuk, Nick Connolly
Currently there are some public headers that include 'sys/queue.h', which
is not POSIX, but is usually provided by the Linux/BSD system library.
(Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building a DPDK library works fine. But when OVS or other
applications use DPDK as a library on Windows, the inclusion of
'sys/queue.h' in some DPDK public headers triggers an error because no
such file exists.
One solution is to install 'lib/eal/windows/include/sys/queue.h' into the
Windows environment, as in [1]. However, this means DPDK exports the
functionality of 'sys/queue.h' into the environment, which might cause
symbol, macro, and header clashes with other applications.
The patch fixes it by removing the "#include <sys/queue.h>" from
DPDK public headers, so programs including DPDK headers don't depend
on the system to provide 'sys/queue.h'. Where these public headers use
macros such as TAILQ_xxx, we replace them with ones carrying an RTE_
prefix. For Windows, we copy the definitions from <sys/queue.h> to
rte_os.h in Windows EAL. Note that these RTE_ macros are compatible
with <sys/queue.h>, both at the level of API (to use with <sys/queue.h>
macros in C files) and ABI (to avoid breaking it).
Additionally, since TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Acked-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v8-v9:
* Acked by Dmitry Kozliuk
* remove #ifdef at RTE_TAILQ_FOREACH_SAFE
* remove period at title
v7-v8:
* remove duplicate RTE_TAILQ_FOREACH_SAFE at rte_os.h
put the macro at rte_tailq.h
* remove inline comments
* diff
https://github.com/williamtu/dpdk/compare/a4144ff11b..6cb7cd8daf
v6-v7:
* remove some redundant "#incldue <sys/queue.h>"
* remove extra newline, add comment at rte_os.h for windows
use of bundled sys/queue
v5-v6:
* fix tab/indent issue, fix type and spelling
* fix duplicate RTE_TAILQ_FOREACH_SAFE
* fix build error due to drivers/net/mlx5/mlx5_flow_meter.c
---
drivers/bus/auxiliary/private.h | 1 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 5 ++--
drivers/bus/dpaa/dpaa_bus.c | 4 ++--
drivers/bus/fslmc/fslmc_bus.c | 4 ++--
drivers/bus/fslmc/fslmc_vfio.c | 9 +++++---
drivers/bus/ifpga/rte_bus_ifpga.h | 8 +++----
drivers/bus/pci/pci_params.c | 2 ++
drivers/bus/pci/rte_bus_pci.h | 13 +++++------
drivers/bus/pci/windows/pci.c | 3 +++
drivers/bus/pci/windows/pci_netuio.c | 2 ++
drivers/bus/vdev/rte_bus_vdev.h | 7 +++---
drivers/bus/vdev/vdev.c | 3 ++-
drivers/bus/vmbus/rte_bus_vmbus.h | 13 +++++------
drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 2 +-
drivers/net/bonding/rte_eth_bond_flow.c | 2 +-
drivers/net/failsafe/failsafe_flow.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 9 ++++----
drivers/net/i40e/i40e_ethdev.h | 1 +
drivers/net/i40e/i40e_flow.c | 6 ++---
drivers/net/i40e/i40e_hash.c | 2 +-
drivers/net/i40e/rte_pmd_i40e.c | 6 ++---
drivers/net/iavf/iavf_generic_flow.c | 14 +++++------
drivers/net/ice/ice_dcf_ethdev.c | 1 +
drivers/net/ice/ice_ethdev.c | 4 ++--
drivers/net/ice/ice_generic_flow.c | 14 +++++------
drivers/net/ipn3ke/ipn3ke_flow.c | 2 +-
drivers/net/mlx5/mlx5_flow_dv.c | 2 +-
drivers/net/mlx5/mlx5_flow_meter.c | 2 +-
drivers/net/softnic/rte_eth_softnic_flow.c | 3 ++-
drivers/net/softnic/rte_eth_softnic_swq.c | 2 +-
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 2 +-
lib/bbdev/rte_bbdev.h | 2 +-
lib/cryptodev/rte_cryptodev.h | 2 +-
lib/cryptodev/rte_cryptodev_pmd.h | 2 +-
lib/eal/common/eal_common_devargs.c | 4 ++--
lib/eal/common/eal_common_log.c | 1 +
lib/eal/common/eal_common_options.c | 2 +-
lib/eal/common/eal_private.h | 1 +
lib/eal/freebsd/include/rte_os.h | 10 ++++++++
lib/eal/include/rte_bus.h | 5 ++--
lib/eal/include/rte_class.h | 6 ++---
lib/eal/include/rte_dev.h | 5 ++--
lib/eal/include/rte_devargs.h | 3 +--
lib/eal/include/rte_log.h | 1 -
lib/eal/include/rte_service.h | 1 -
lib/eal/include/rte_tailq.h | 15 +++++-------
lib/eal/linux/include/rte_os.h | 10 ++++++++
lib/eal/windows/eal_alarm.c | 1 +
lib/eal/windows/include/rte_os.h | 27 ++++++++++++++++++++++
lib/efd/rte_efd.c | 2 +-
lib/ethdev/rte_ethdev_core.h | 2 +-
lib/hash/rte_fbk_hash.h | 1 -
lib/hash/rte_thash.c | 2 ++
lib/ip_frag/rte_ip_frag.h | 4 ++--
lib/mempool/rte_mempool.c | 2 +-
lib/mempool/rte_mempool.h | 9 ++++----
lib/pci/rte_pci.h | 1 -
lib/ring/rte_ring_core.h | 1 -
lib/table/rte_swx_table.h | 7 +++---
lib/table/rte_swx_table_selector.h | 5 ++--
lib/vhost/iotlb.c | 11 +++++----
lib/vhost/rte_vdpa_dev.h | 2 +-
lib/vhost/vdpa.c | 2 +-
63 files changed, 175 insertions(+), 124 deletions(-)
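(Before the diff, the shape of the compatibility layer introduced here; a sketch assuming the non-Windows rte_os.h forwards straight to <sys/queue.h>, with the exact definitions living in each platform's rte_os.h and in rte_tailq.h.)

/* Linux/FreeBSD: the RTE_ names are thin wrappers over the system macros. */
#define RTE_TAILQ_HEAD(name, type)	TAILQ_HEAD(name, type)
#define RTE_TAILQ_ENTRY(type)		TAILQ_ENTRY(type)
#define RTE_TAILQ_FIRST(head)		TAILQ_FIRST(head)
#define RTE_TAILQ_NEXT(elem, field)	TAILQ_NEXT(elem, field)
#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)

/* TAILQ_FOREACH_SAFE is a BSD extension absent from many <sys/queue.h>
 * copies, so DPDK carries its own, built from the portable names above;
 * it permits removing or freeing var safely within the loop. */
#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
	for ((var) = RTE_TAILQ_FIRST((head)); \
	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1); \
	    (var) = (tvar))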
diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
#include <stdbool.h>
#include <stdio.h>
+#include <sys/queue.h>
#include "rte_bus_auxiliary.h"
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
#include <stdlib.h>
#include <limits.h>
#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
#include <inttypes.h>
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
* A structure describing an auxiliary device.
*/
struct rte_auxiliary_device {
- TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
+ RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
struct rte_intr_handle intr_handle; /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
* A structure describing an auxiliary driver.
*/
struct rte_auxiliary_driver {
- TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
struct rte_driver driver; /**< Inherit core driver. */
struct rte_auxiliary_bus *bus; /**< Auxiliary bus reference. */
rte_auxiliary_match_t *match; /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
- TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
comp = compare_dpaa_devices(newdev, dev);
if (comp < 0) {
TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
- TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
free(dev);
dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
struct rte_dpaa2_device *dev;
struct rte_dpaa2_device *t_dev;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
free(dev);
dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
struct rte_dpaa2_device *dev = NULL;
struct rte_dpaa2_device *tdev = NULL;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
comp = compare_dpaa2_devname(newdev, dev);
if (comp < 0) {
TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
int dpmcp_count = 0, dpio_count = 0, current_device;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+ dev_temp) {
if (dev->dev_type == DPAA2_MPORTAL) {
dpmcp_count++;
if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
/* Search the MCP as that should be initialized first. */
current_device = 0;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+ dev_temp) {
if (dev->dev_type == DPAA2_MPORTAL) {
current_device++;
if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
}
current_device = 0;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+ dev_temp) {
if (dev->dev_type == DPAA2_IO)
current_device++;
if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..a85e90d384 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
struct rte_afu_driver;
/** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
/** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
#define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
@@ -71,7 +71,7 @@ struct rte_afu_shared {
* A structure describing a AFU device.
*/
struct rte_afu_device {
- TAILQ_ENTRY(rte_afu_device) next; /**< Next in device list. */
+ RTE_TAILQ_ENTRY(rte_afu_device) next; /**< Next in device list. */
struct rte_device device; /**< Inherit core device */
struct rte_rawdev *rawdev; /**< Point Rawdev */
struct rte_afu_id id; /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
* A structure describing a AFU device.
*/
struct rte_afu_driver {
- TAILQ_ENTRY(rte_afu_driver) next; /**< Next afu driver. */
+ RTE_TAILQ_ENTRY(rte_afu_driver) next; /**< Next afu driver. */
struct rte_driver driver; /**< Inherit core driver. */
afu_probe_t *probe; /**< Device Probe function. */
afu_remove_t *remove; /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
* Copyright 2018 Gaëtan Rivet
*/
+#include <sys/queue.h>
+
#include <rte_bus.h>
#include <rte_bus_pci.h>
#include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
#include <stdlib.h>
#include <limits.h>
#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
#include <inttypes.h>
@@ -37,16 +36,16 @@ struct rte_pci_device;
struct rte_pci_driver;
/** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
/** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
/* PCI Bus iterators */
#define FOREACH_DEVICE_ON_PCIBUS(p) \
- TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
#define FOREACH_DRIVER_ON_PCIBUS(p) \
- TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
struct rte_devargs;
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
* A structure describing a PCI device.
*/
struct rte_pci_device {
- TAILQ_ENTRY(rte_pci_device) next; /**< Next probed PCI device. */
+ RTE_TAILQ_ENTRY(rte_pci_device) next; /**< Next probed PCI device. */
struct rte_device device; /**< Inherit core device */
struct rte_pci_addr addr; /**< PCI location. */
struct rte_pci_id id; /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
* A structure describing a PCI driver.
*/
struct rte_pci_driver {
- TAILQ_ENTRY(rte_pci_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_pci_driver) next; /**< Next in list. */
struct rte_driver driver; /**< Inherit core driver. */
struct rte_pci_bus *bus; /**< PCI bus reference. */
rte_pci_probe_t *probe; /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright 2020 Mellanox Technologies, Ltd
*/
+
+#include <sys/queue.h>
+
#include <rte_windows.h>
#include <rte_errno.h>
#include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
* Copyright(c) 2020 Intel Corporation.
*/
+#include <sys/queue.h>
+
#include <rte_windows.h>
#include <rte_errno.h>
#include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
extern "C" {
#endif
-#include <sys/queue.h>
#include <rte_dev.h>
#include <rte_devargs.h>
struct rte_vdev_device {
- TAILQ_ENTRY(rte_vdev_device) next; /**< Next attached vdev */
+ RTE_TAILQ_ENTRY(rte_vdev_device) next; /**< Next attached vdev */
struct rte_device device; /**< Inherit core device */
};
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
}
/** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
/**
* Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
* A virtual device driver abstraction.
*/
struct rte_vdev_driver {
- TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
struct rte_driver driver; /**< Inherited general driver. */
rte_vdev_probe_t *probe; /**< Virtual device probe function. */
rte_vdev_remove_t *remove; /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
struct vdev_custom_scan *custom_scan, *tmp_scan;
rte_spinlock_lock(&vdev_custom_scan_lock);
- TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+ RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+ tmp_scan) {
if (custom_scan->callback != callback ||
(custom_scan->user_arg != (void *)-1 &&
custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
#include <limits.h>
#include <stdbool.h>
#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
#include <inttypes.h>
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
struct vmbus_channel;
struct vmbus_mon_page;
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
/* VMBus iterators */
#define FOREACH_DEVICE_ON_VMBUS(p) \
- TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
#define FOREACH_DRIVER_ON_VMBUS(p) \
- TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
/** Maximum number of VMBUS resources. */
enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
* A structure describing a VMBUS device.
*/
struct rte_vmbus_device {
- TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
+ RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
const struct rte_vmbus_driver *driver; /**< Associated driver */
struct rte_device device; /**< Inherit core device */
rte_uuid_t device_id; /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
* A structure describing a VMBUS driver.
*/
struct rte_vmbus_driver {
- TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
struct rte_driver driver;
struct rte_vmbus_bus *bus; /**< VM bus reference. */
vmbus_probe_t *probe; /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
struct ulp_context_list_entry *entry, *temp;
rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
- TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
if (entry->ulp_ctx == ulp_ctx) {
TAILQ_REMOVE(&ulp_cntx_list, entry, next);
rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
/* Destroy all bond flows from its slaves instead of flushing them to
* keep the LACP flow or any other external flows.
*/
- TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
lret = bond_flow_destroy(dev, flow, err);
if (unlikely(lret != 0))
ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
return ret;
}
}
- TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
fs_flow_release(&flow);
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
/* VSI has child to attach, release child first */
if (vsi->veb) {
- TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+ RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
return -1;
}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
}
if (vsi->floating_veb) {
- TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+ RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+ list, temp) {
if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
return -1;
}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
/* Remove all macvlan filters of the VSI */
i40e_vsi_remove_all_macvlan_filter(vsi);
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
rte_free(f);
if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
i = 0;
/* Remove all existing mac */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
mac_filter[i] = f->mac_info;
ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
#define _I40E_ETHDEV_H_
#include <stdint.h>
+#include <sys/queue.h>
#include <rte_time.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
}
/* Delete FDIR flows in flow list. */
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
}
/* Delete ethertype flows in flow list. */
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
}
/* Delete tunnel flows in flow list. */
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
{
struct rte_flow *flow, *next;
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
if (flow->filter_type != RTE_ETH_FILTER_HASH)
continue;
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
void *temp;
/* remove all the MACs */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
vlan_num = vsi->vlan_num;
filter_type = f->mac_info.filter_type;
if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
void *temp;
/* restore all the MACs */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
rte_ether_addr_copy(mac_addr, &vf->mac_addr);
/* Remove all existing mac */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
!= I40E_SUCCESS)
PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
TAILQ_INIT(&vf->dist_parser_list);
rte_spinlock_init(&vf->flow_ops_lock);
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->init == NULL) {
PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
struct iavf_flow_parser_node *p_parser;
void *temp;
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->uninit)
engine->uninit(ad);
}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
if (list == NULL)
return;
- TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
if (p_parser->parser->engine->type == parser->engine->type) {
TAILQ_REMOVE(list, p_parser, node);
rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
void *temp;
void *meta = NULL;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
if (parser_node->parser->parse_pattern_action(ad,
parser_node->parser->array,
parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
void *temp;
void *meta = NULL;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
if (parser_node->parser->parse_pattern_action(ad,
parser_node->parser->array,
parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
void *temp;
if (flow && flow->engine) {
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine == flow->engine)
return true;
}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
void *temp;
int ret = 0;
- TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
ret = iavf_flow_destroy(dev, p_flow, error);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
#include <errno.h>
#include <stdbool.h>
+#include <sys/queue.h>
#include <sys/types.h>
#include <unistd.h>
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
if (!vsi || !vsi->mac_num)
return -EINVAL;
- TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
if (ret != ICE_SUCCESS) {
ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
if (vsi->vlan_num == 0)
return 0;
- TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
if (ret != ICE_SUCCESS) {
ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
TAILQ_INIT(&pf->dist_parser_list);
rte_spinlock_init(&pf->flow_ops_lock);
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->init == NULL) {
PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
struct ice_flow_parser_node *p_parser;
void *temp;
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->uninit)
engine->uninit(ad);
}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
if (list == NULL)
return;
- TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
if (p_parser->parser->engine->type == parser->engine->type) {
TAILQ_REMOVE(list, p_parser, node);
rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
void *meta = NULL;
void *temp;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
int ret;
if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
struct ice_flow_parser_node *parser_node;
void *temp;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
if (parser_node->parser->parse_pattern_action(ad,
parser_node->parser->array,
parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
void *temp;
int ret = 0;
- TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
ret = ice_flow_destroy(dev, p_flow, error);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
rte_spinlock_lock(&pf->flow_ops_lock);
- TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
if (!p_flow->engine->redirect)
continue;
ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
struct rte_flow *flow, *temp;
- TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
TAILQ_REMOVE(&hw->flow_list, flow, next);
rte_free(flow);
}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 31d857030f..ba2bf4de37 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15099,7 +15099,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
next_fm = mlx5_flow_meter_find(priv,
policy->act_cnt[i].next_mtr_id, NULL);
- TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
+ RTE_TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
next_port, tmp) {
claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
tbl = container_of(color_rule->matcher->tbl,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a24bd9c7ae..ba4e9fca17 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -2168,7 +2168,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
priv->mtr_idx_tbl = NULL;
}
} else {
- TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
fm = &legacy_fm->fm;
if (mlx5_flow_meter_params_flush(dev, fm, 0))
return -rte_mtr_error_set(error, EINVAL,
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
void *temp;
int status;
- TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+ temp) {
/* Rule delete. */
status = softnic_pipeline_table_rule_delete
(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
{
struct softnic_swq *swq, *tswq;
- TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+ RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
DPAA2_QDMA_FUNC_TRACE();
- TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+ RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
if (queue->dpdmai_dev == dpdmai_dev) {
TAILQ_REMOVE(&qdma_queue_list, queue, next);
rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
struct rte_intr_handle;
/** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
/**
* @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
struct rte_cryptodev_callback;
/** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
/**
* Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
/* Cryptodev driver, containing the driver ID */
struct cryptodev_driver {
- TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
const struct rte_driver *driver;
uint8_t id;
};
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..2e2f35c47e 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -291,7 +291,7 @@ rte_devargs_insert(struct rte_devargs **da)
if (*da == NULL || (*da)->bus == NULL)
return -1;
- TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
if (listed_da == *da)
/* devargs already in the list */
return 0;
@@ -358,7 +358,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
if (devargs == NULL || devargs->bus == NULL)
return -1;
- TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
strcmp(d->name, devargs->name) == 0) {
TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
#include <errno.h>
#include <regex.h>
#include <fnmatch.h>
+#include <sys/queue.h>
#include <rte_eal.h>
#include <rte_log.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..24f5ceaab0 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -283,7 +283,7 @@ eal_option_device_parse(void)
void *tmp;
int ret = 0;
- TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
if (ret == 0) {
ret = rte_devargs_add(devopt->type, devopt->arg);
if (ret)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..86dab1f057 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -8,6 +8,7 @@
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
+#include <sys/queue.h>
#include <rte_dev.h>
#include <rte_lcore.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..9d8a69008c 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,16 @@
*/
#include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
typedef cpuset_t rte_cpuset_t;
#define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
#endif
#include <stdio.h>
-#include <sys/queue.h>
#include <rte_log.h>
#include <rte_dev.h>
/** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
/**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
* A structure describing a generic bus.
*/
struct rte_bus {
- TAILQ_ENTRY(rte_bus) next; /**< Next bus object in linked list */
+ RTE_TAILQ_ENTRY(rte_bus) next; /**< Next bus object in linked list */
const char *name; /**< Name of the bus */
rte_bus_scan_t scan; /**< Scan for devices attached to bus */
rte_bus_probe_t probe; /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
extern "C" {
#endif
-#include <sys/queue.h>
-
#include <rte_dev.h>
/** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
/**
* A structure describing a generic device class.
*/
struct rte_class {
- TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+ RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
const char *name; /**< Name of the class */
rte_dev_iterate_t dev_iterate; /**< Device iterator. */
};
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
#endif
#include <stdio.h>
-#include <sys/queue.h>
#include <rte_config.h>
#include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
* A structure describing a device driver.
*/
struct rte_driver {
- TAILQ_ENTRY(rte_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_driver) next; /**< Next in list. */
const char *name; /**< Driver name. */
const char *alias; /**< Driver alias. */
};
@@ -90,7 +89,7 @@ struct rte_driver {
* A structure describing a generic device.
*/
struct rte_device {
- TAILQ_ENTRY(rte_device) next; /**< Next device */
+ RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
const char *name; /**< Device name */
const struct rte_driver *driver; /**< Driver assigned after probing */
const struct rte_bus *bus; /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
#endif
#include <stdio.h>
-#include <sys/queue.h>
#include <rte_compat.h>
#include <rte_bus.h>
@@ -76,7 +75,7 @@ enum rte_devtype {
*/
struct rte_devargs {
/** Next in list. */
- TAILQ_ENTRY(rte_devargs) next;
+ RTE_TAILQ_ENTRY(rte_devargs) next;
/** Type of device. */
enum rte_devtype type;
/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
#include <stdio.h>
#include <stdarg.h>
#include <stdbool.h>
-#include <sys/queue.h>
#include <rte_common.h>
#include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
#include<stdio.h>
#include <stdint.h>
-#include <sys/queue.h>
#include <rte_config.h>
#include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..0f67f9e4db 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
extern "C" {
#endif
-#include <sys/queue.h>
#include <stdio.h>
#include <rte_debug.h>
/** dummy structure type used by the rte_tailq APIs */
struct rte_tailq_entry {
- TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+ RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
void *data; /**< Pointer to the data referenced by this tailq entry */
};
/** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
#define RTE_TAILQ_NAMESIZE 32
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
* rte_eal_tailqs_init()
*/
struct rte_tailq_head *head;
- TAILQ_ENTRY(rte_tailq_elem) next;
+ RTE_TAILQ_ENTRY(rte_tailq_elem) next;
const char name[RTE_TAILQ_NAMESIZE];
};
@@ -126,12 +125,10 @@ RTE_INIT(tailqinitfn_ ##t) \
}
/* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
- for ((var) = TAILQ_FIRST((head)); \
- (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = RTE_TAILQ_FIRST((head)); \
+ (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1); \
(var) = (tvar))
-#endif
#ifdef __cplusplus
}
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..35c07c70cb 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,16 @@
*/
#include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
#ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
#include <stdatomic.h>
#include <stdbool.h>
+#include <sys/queue.h>
#include <rte_alarm.h>
#include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..a0a311495e 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,33 @@
extern "C" {
#endif
+/* These macros are compatible with bundled sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) \
+struct name { \
+ struct type *tqh_first; \
+ struct type **tqh_last; \
+}
+#define RTE_TAILQ_ENTRY(type) \
+struct { \
+ struct type *tqe_next; \
+ struct type **tqe_prev; \
+}
+#define RTE_TAILQ_FOREACH(var, head, field) \
+ for ((var) = RTE_TAILQ_FIRST((head)); \
+ (var); \
+ (var) = RTE_TAILQ_NEXT((var), field))
+#define RTE_TAILQ_FIRST(head) ((head)->tqh_first)
+#define RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define RTE_STAILQ_HEAD(name, type) \
+struct name { \
+ struct type *stqh_first; \
+ struct type **stqh_last; \
+}
+#define RTE_STAILQ_ENTRY(type) \
+struct { \
+ struct type *stqe_next; \
+}
+
/* cpu_set macros implementation */
#define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
#define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
rte_mcfg_tailq_write_lock();
- TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
if (te->data == (void *) table) {
TAILQ_REMOVE(efd_list, te, next);
rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
struct rte_eth_dev_callback;
/** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
struct rte_eth_dev;
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
#include <stdint.h>
#include <errno.h>
-#include <sys/queue.h>
#ifdef __cplusplus
extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
* Copyright(c) 2021 Intel Corporation
*/
+#include <sys/queue.h>
+
#include <rte_thash.h>
#include <rte_tailq.h>
#include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
* First two entries in the frags[] array are for the last and first fragments.
*/
struct ip_frag_pkt {
- TAILQ_ENTRY(ip_frag_pkt) lru; /**< LRU list */
+ RTE_TAILQ_ENTRY(ip_frag_pkt) lru; /**< LRU list */
struct ip_frag_key key; /**< fragmentation key */
uint64_t start; /**< creation timestamp */
uint32_t total_size; /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
/**< mbufs to be freed */
};
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
/** fragmentation table statistics */
struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
rte_mcfg_mempool_read_lock();
- TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+ RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
(*func)((struct rte_mempool *) te->data, arg);
}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
#include <stdint.h>
#include <errno.h>
#include <inttypes.h>
-#include <sys/queue.h>
#include <rte_config.h>
#include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
* double-frees.
*/
struct rte_mempool_objhdr {
- STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+ RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
struct rte_mempool *mp; /**< The mempool owning the object. */
rte_iova_t iova; /**< IO address of the object. */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
/**
* A list of object headers type
*/
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
/**
* A list of memory where objects are stored
*/
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
/**
* Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
* and physically contiguous.
*/
struct rte_mempool_memhdr {
- STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+ RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
struct rte_mempool *mp; /**< The mempool owning the chunk */
void *addr; /**< Virtual address of the chunk */
rte_iova_t iova; /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
#include <stdio.h>
#include <limits.h>
-#include <sys/queue.h>
#include <inttypes.h>
#include <sys/types.h>
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
#include <stdio.h>
#include <stdint.h>
#include <string.h>
-#include <sys/queue.h>
#include <errno.h>
#include <rte_common.h>
#include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
*/
#include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
/** Match type. */
enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
/** Used to facilitate the membership of this table entry to a
* linked list.
*/
- TAILQ_ENTRY(rte_swx_table_entry) node;
+ RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
/** Key value for the current entry. Array of *key_size* bytes or NULL
* if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
};
/** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
/**
* Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
*/
#include <stdint.h>
-#include <sys/queue.h>
#include <rte_compat.h>
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
/** Group member parameters. */
struct rte_swx_table_selector_member {
/** Linked list connectivity. */
- TAILQ_ENTRY(rte_swx_table_selector_member) node;
+ RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
/** Member ID. */
uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
};
/** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
/** Group parameters. */
struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
rte_rwlock_write_lock(&vq->iotlb_pending_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
rte_mempool_put(vq->iotlb_pool, node);
}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
rte_rwlock_write_lock(&vq->iotlb_pending_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+ temp_node) {
if (node->iova < iova)
continue;
if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
rte_rwlock_write_lock(&vq->iotlb_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
TAILQ_REMOVE(&vq->iotlb_list, node, next);
rte_mempool_put(vq->iotlb_pool, node);
}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
entry_idx = rte_rand() % vq->iotlb_cache_nr;
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
if (!entry_idx) {
TAILQ_REMOVE(&vq->iotlb_list, node, next);
rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
rte_rwlock_write_lock(&vq->iotlb_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
/* Sorted list */
if (unlikely(iova + size < node->iova))
break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
* vdpa device structure includes device address and device operations.
*/
struct rte_vdpa_device {
- TAILQ_ENTRY(rte_vdpa_device) next;
+ RTE_TAILQ_ENTRY(rte_vdpa_device) next;
/** Generic device information */
struct rte_device *device;
/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
int ret = -1;
rte_spinlock_lock(&vdpa_device_list_lock);
- TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+ RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
if (dev != cur_dev)
continue;
--
2.30.2
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [RFC 11/15] eventdev: reserve fields in timer object
@ 2021-08-24 15:10 3% ` Stephen Hemminger
2021-09-01 6:48 0% ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-08-24 15:10 UTC (permalink / raw)
To: pbhagavatula; +Cc: jerinj, Erik Gabriel Carrillo, konstantin.ananyev, dev
On Tue, 24 Aug 2021 01:10:15 +0530
<pbhagavatula@marvell.com> wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Reserve fields in rte_event_timer data structure to address future
> use cases.
> Also, remove volatile from rte_event_timer.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reserved fields are not a good idea. They don't solve future API/ABI problems.
The issue is that you need to zero them and check that they are zero;
otherwise they can't safely be used later. This happened with Linux kernel
system calls where, in several cases, a flag field was added for future use.
The problem is that old programs would work with any garbage in the flag
field, and therefore the flag could not be extended.
A better way is to make structures internal opaque objects that
can be resized. Why is rte_event_timer_adapter exposed in the API?
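To make the pitfall concrete, here is a minimal C sketch (hypothetical
structure and function names, not taken from the patch) of why an
unchecked reserved field can never be given a meaning later:

#include <stdint.h>
#include <errno.h>

/* Hypothetical public structure carrying a reserved field. */
struct evt_timer {
	uint64_t timeout_ticks;
	uint64_t rsvd; /* reserved for future use */
};

static int
evt_timer_arm(const struct evt_timer *tim)
{
	/*
	 * Reject non-zero reserved bits from day one. Without this check,
	 * old binaries may pass garbage in 'rsvd', and a later release can
	 * never assign the field a meaning without changing the behaviour
	 * of existing, previously "working" applications.
	 */
	if (tim->rsvd != 0)
		return -EINVAL;
	/* ... arm the timer ... */
	return 0;
}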
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
2021-08-24 7:58 3% ` David Marchand
@ 2021-08-24 12:19 3% ` Lincoln Lavoie
0 siblings, 0 replies; 200+ results
From: Lincoln Lavoie @ 2021-08-24 12:19 UTC (permalink / raw)
To: David Marchand
Cc: dpdklab, Thomas Monjalon, ci, dev, Ray Kinsella, Aaron Conole,
Owen Hilyard
Hi David,
We'll check. Owen was working on a change to allow ABI to still run for
merges onto the 20.11 branch, so the LTS branches can remain stable, while
per-patch testing is off for the 21.11 release. Looks like that has a side
effect on how these patches were flagged in the system.
Cheers,
Lincoln
On Tue, Aug 24, 2021 at 3:58 AM David Marchand <david.marchand@redhat.com>
wrote:
> Hello Lincoln,
>
> On Tue, Aug 17, 2021 at 2:04 PM Lincoln Lavoie <lylavoie@iol.unh.edu>
> wrote:
> >
> > Hi David,
> >
> > ABI testing was disabled / stopped on Friday in the Community CI lab.
> > Patches from before that for 21.11 would still have had the test run and
> > could have failures listed. I'm not sure if there is a way to "remove"
> > those failure marks from Patchwork. But, for all new patches since then,
> > ABI hasn't been run.
>
> I can see new reports for ABI; please can someone from UNH double-check?
> https://lab.dpdk.org/results/dashboard/patchsets/18301/
> https://lab.dpdk.org/results/dashboard/patchsets/18303/
> https://lab.dpdk.org/results/dashboard/patchsets/18325/
>
>
> --
> David Marchand
>
>
--
*Lincoln Lavoie*
Principal Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
lylavoie@iol.unh.edu
https://www.iol.unh.edu
+1-603-674-2755 (m)
<https://www.iol.unh.edu>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
@ 2021-08-24 7:58 3% ` David Marchand
2021-08-24 12:19 3% ` Lincoln Lavoie
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-08-24 7:58 UTC (permalink / raw)
To: Lincoln Lavoie, dpdklab
Cc: Thomas Monjalon, ci, dev, Ray Kinsella, Aaron Conole
Hello Lincoln,
On Tue, Aug 17, 2021 at 2:04 PM Lincoln Lavoie <lylavoie@iol.unh.edu> wrote:
>
> Hi David,
>
> ABI testing was disabled / stopped on Friday in the Community CI lab. Patches from before that for 21.11 would still have had the test run and could have failures listed. I'm not sure if there is a way to "remove" those failure marks from Patchwork. But, for all new patches since then, ABI hasn't been run.
I can see new reports for ABI; please can someone from UNH double-check?
https://lab.dpdk.org/results/dashboard/patchsets/18301/
https://lab.dpdk.org/results/dashboard/patchsets/18303/
https://lab.dpdk.org/results/dashboard/patchsets/18325/
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: abstract the behaviour of rte_ctrl_thread_create
2021-08-23 9:40 3% ` Olivier Matz
@ 2021-08-23 21:18 0% ` Honnappa Nagarahalli
0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-08-23 21:18 UTC (permalink / raw)
To: Olivier Matz
Cc: thomas, dev, lucp.at.work, david.marchand, Ruifeng Wang, nd, nd
<snip>
> > >
> > > 30/07/2021 23:44, Honnappa Nagarahalli:
> > > > The current expected behaviour of the function
> > > > rte_ctrl_thread_create is rigid, which makes the implementation of the
> > > > function complex.
> > > > Make the expected behaviour abstract to allow for simplified
> > > > implementation.
> > > >
> > > > With this change, the calls to pthread_setaffinity_np can be moved
> > > > to the control thread. This will avoid the use of
> > > > pthread_barrier_wait and simplify the synchronization mechanism
> > > > between rte_ctrl_thread_create and the calling thread.
> > > >
> > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > ---
> > > > +* eal: The expected behaviour of the function ``rte_ctrl_thread_create``
> > > > +  abstracted to allow for simplified implementation. The new behaviour is
> > > > +  as follows:
> > > > +  Creates a control thread with the given name. The affinity of the new
> > > > +  thread is based on the CPU affinity retrieved at the time rte_eal_init()
> > > > +  was called, the dataplane and service lcores are then excluded.
> > >
> > > I don't understand what is different from the current API:
> > > * Wrapper to pthread_create(), pthread_setname_np() and
> > > * pthread_setaffinity_np(). The affinity of the new thread is based
> > > * on the CPU affinity retrieved at the time rte_eal_init() was
> > > called,
> > > * the dataplane and service lcores are then excluded.
> > My concern is for the word "Wrapper". I am not sure how much we are
> > bound by that to keep the code as a "wrapper".
> > The new patch does not change the high level behavior.
>
> I am ok to remove the word "wrapper" from the description, and I agree it can
> be better described without quoting the pthread_* functions.
>
> > Are you saying you are ok with the patch without the deprecation notice?
>
> I don't think it requires a deprecation notice if the API and ABI are left
> unchanged. To be honest, I find it a bit hard to understand what is really
> changed by reading the deprecation notice:
Thanks, Olivier. I agree; I was also not sure. The term "wrapper" made me feel that we were committing to specific return codes for the application.
At the macro level, I think the expected behavior remains the same.
>
> > +* eal: The expected behaviour of the function ``rte_ctrl_thread_create``
> > +  abstracted to allow for simplified implementation. The new behaviour is
> > +  as follows:
> > +  Creates a control thread with the given name. The affinity of the new
> > +  thread is based on the CPU affinity retrieved at the time rte_eal_init()
> > +  was called, the dataplane and service lcores are then excluded.
>
> I'll send my comments to your patch:
> http://patches.dpdk.org/project/dpdk/patch/20210802051652.3611-1-honnappa.nagarahalli@arm.com/
>
>
> Thanks,
> Olivier
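For context, a minimal usage sketch of the function under discussion,
based on its prototype in rte_lcore.h (names other than
rte_ctrl_thread_create are illustrative); note that the naming and
affinity behaviour described above is applied inside the call, not by
the caller:

#include <pthread.h>
#include <rte_lcore.h>

static void *
ctrl_loop(void *arg)
{
	(void)arg;
	/* Runs with the EAL startup affinity minus the dataplane and
	 * service lcores. */
	for (;;) {
		/* ... handle control-plane work ... */
	}
	return NULL;
}

static int
spawn_ctrl_thread(void)
{
	pthread_t tid;

	/* Thread name and CPU affinity are set by the call itself;
	 * returns 0 on success or a negative errno value. */
	return rte_ctrl_thread_create(&tid, "app-ctrl", NULL,
				      ctrl_loop, NULL);
}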
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [RFC 04/15] eventdev: move inline APIs into separate structure
@ 2021-08-23 19:40 2% ` pbhagavatula
2021-09-08 12:03 0% ` Kinsella, Ray
1 sibling, 1 reply; 200+ results
From: pbhagavatula @ 2021-08-23 19:40 UTC (permalink / raw)
To: jerinj, Ray Kinsella; +Cc: konstantin.ananyev, dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
lib/eventdev/eventdev_pmd.h | 10 ++++
lib/eventdev/eventdev_private.c | 100 +++++++++++++++++++++++++++++++
lib/eventdev/meson.build | 1 +
lib/eventdev/rte_eventdev.c | 25 +++++++-
lib/eventdev/rte_eventdev_core.h | 44 ++++++++++++++
lib/eventdev/version.map | 4 ++
6 files changed, 183 insertions(+), 1 deletion(-)
create mode 100644 lib/eventdev/eventdev_private.c
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index a25d3f1fb5..5eaa29fe14 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1193,6 +1193,16 @@ __rte_internal
int
rte_event_pmd_release(struct rte_eventdev *eventdev);
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param rba
+ * The *api* pointer to reset.
+ */
+__rte_internal
+void
+rte_event_dev_api_reset(struct rte_eventdev_api *api);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..c60fd2b522
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused uint8_t dev_id, __rte_unused uint8_t port_id,
+ __rte_unused const struct rte_event *ev)
+{
+ RTE_EDEV_LOG_ERR(
+ "event enqueue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t port_id,
+ __rte_unused const struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event enqueue burst requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused uint8_t dev_id, __rte_unused uint8_t port_id,
+ __rte_unused struct rte_event *ev,
+ __rte_unused uint64_t timeout_ticks)
+{
+ RTE_EDEV_LOG_ERR(
+ "event dequeue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t port_id,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events,
+ __rte_unused uint64_t timeout_ticks)
+{
+ RTE_EDEV_LOG_ERR(
+ "event dequeue burst requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t port_id,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event Tx adapter enqueue requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t port_id,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event Tx adapter enqueue same destination requested for unconfigured event device");
+ return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused uint8_t dev_id,
+ __rte_unused uint8_t port_id,
+ __rte_unused struct rte_event ev[],
+ __rte_unused uint16_t nb_events)
+{
+ RTE_EDEV_LOG_ERR(
+ "event crypto adapter enqueue requested for unconfigured event device");
+ return 0;
+}
+
+void
+rte_event_dev_api_reset(struct rte_eventdev_api *api)
+{
+ static const struct rte_eventdev_api dummy = {
+ .enqueue = dummy_event_enqueue,
+ .enqueue_burst = dummy_event_enqueue_burst,
+ .enqueue_new_burst = dummy_event_enqueue_burst,
+ .enqueue_forward_burst = dummy_event_enqueue_burst,
+ .dequeue = dummy_event_dequeue,
+ .dequeue_burst = dummy_event_dequeue_burst,
+ .txa_enqueue = dummy_event_tx_adapter_enqueue,
+ .txa_enqueue_same_dest =
+ dummy_event_tx_adapter_enqueue_same_dest,
+ .ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ };
+
+ *api = dummy;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
endif
sources = files(
+ 'eventdev_private.c',
'rte_eventdev.c',
'rte_event_ring.c',
'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 21c5c55086..5ff8596788 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -44,6 +44,9 @@ static struct rte_eventdev_global eventdev_globals = {
.nb_devs = 0
};
+/* Public fastpath APIs. */
+struct rte_eventdev_api *rte_eventdev_api;
+
/* Event dev north bound API implementation */
uint8_t
@@ -394,8 +397,9 @@ int
rte_event_dev_configure(uint8_t dev_id,
const struct rte_event_dev_config *dev_conf)
{
- struct rte_eventdev *dev;
struct rte_event_dev_info info;
+ struct rte_eventdev_api api;
+ struct rte_eventdev *dev;
int diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -564,10 +568,14 @@ rte_event_dev_configure(uint8_t dev_id,
return diag;
}
+ api = rte_eventdev_api[dev_id];
+ rte_event_dev_api_reset(&api);
+
/* Configure the device */
diag = (*dev->dev_ops->dev_configure)(dev);
if (diag != 0) {
RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+ rte_event_dev_api_reset(&api);
rte_event_dev_queue_config(dev, 0);
rte_event_dev_port_config(dev, 0);
}
@@ -1396,6 +1404,7 @@ rte_event_dev_close(uint8_t dev_id)
return -EBUSY;
}
+ rte_event_dev_api_reset(&rte_eventdev_api[dev_id]);
rte_eventdev_trace_close(dev_id);
return (*dev->dev_ops->dev_close)(dev);
}
@@ -1479,6 +1488,20 @@ rte_event_pmd_allocate(const char *name, int socket_id)
}
}
+ if (rte_eventdev_api == NULL) {
+ rte_eventdev_api = rte_zmalloc("Eventdev_api",
+ sizeof(struct rte_eventdev_api) *
+ RTE_EVENT_MAX_DEVS,
+ RTE_CACHE_LINE_SIZE);
+ if (rte_eventdev_api == NULL) {
+ RTE_EDEV_LOG_ERR(
+ "Unable to allocate memory for fastpath eventdev API array");
+ return NULL;
+ }
+ for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++)
+ rte_event_dev_api_reset(&rte_eventdev_api[dev_id]);
+ }
+
if (rte_event_pmd_get_named_dev(name) != NULL) {
RTE_EDEV_LOG_ERR("Event device with name %s already "
"allocated!", name);
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 97dfec1ae1..4a7edacb0e 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -12,23 +12,39 @@
extern "C" {
#endif
+typedef uint16_t (*rte_event_enqueue_t)(uint8_t dev_id, uint8_t port_id,
+ const struct rte_event *ev);
typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
/**< @internal Enqueue event on port of a device */
+typedef uint16_t (*rte_event_enqueue_burst_t)(uint8_t dev_id, uint8_t port_id,
+ const struct rte_event ev[],
+ uint16_t nb_events);
typedef uint16_t (*event_enqueue_burst_t)(void *port,
const struct rte_event ev[],
uint16_t nb_events);
/**< @internal Enqueue burst of events on port of a device */
+typedef uint16_t (*rte_event_dequeue_t)(uint8_t dev_id, uint8_t port_id,
+ struct rte_event *ev,
+ uint64_t timeout_ticks);
typedef uint16_t (*event_dequeue_t)(void *port, struct rte_event *ev,
uint64_t timeout_ticks);
/**< @internal Dequeue event from port of a device */
+typedef uint16_t (*rte_event_dequeue_burst_t)(uint8_t dev_id, uint8_t port_id,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
uint16_t nb_events,
uint64_t timeout_ticks);
/**< @internal Dequeue burst of events from port of a device */
+typedef uint16_t (*rte_event_tx_adapter_enqueue_t)(uint8_t dev_id,
+ uint8_t port_id,
+ struct rte_event ev[],
+ uint16_t nb_events);
typedef uint16_t (*event_tx_adapter_enqueue)(void *port, struct rte_event ev[],
uint16_t nb_events);
/**< @internal Enqueue burst of events on port of a device */
@@ -40,11 +56,39 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port,
* burst having same destination Ethernet port & Tx queue.
*/
+typedef uint16_t (*rte_event_crypto_adapter_enqueue_t)(uint8_t dev_id,
+ uint8_t port_id,
+ struct rte_event ev[],
+ uint16_t nb_events);
typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
struct rte_event ev[],
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+struct rte_eventdev_api {
+ rte_event_enqueue_t enqueue;
+ /**< PMD enqueue function. */
+ rte_event_enqueue_burst_t enqueue_burst;
+ /**< PMD enqueue burst function. */
+ rte_event_enqueue_burst_t enqueue_new_burst;
+ /**< PMD enqueue burst new function. */
+ rte_event_enqueue_burst_t enqueue_forward_burst;
+ /**< PMD enqueue burst fwd function. */
+ rte_event_dequeue_t dequeue;
+ /**< PMD dequeue function. */
+ rte_event_dequeue_burst_t dequeue_burst;
+ /**< PMD dequeue burst function. */
+ rte_event_tx_adapter_enqueue_t txa_enqueue;
+ /**< PMD Tx adapter enqueue function. */
+ rte_event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+ /**< PMD Tx adapter enqueue same destination function. */
+ rte_event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ uintptr_t reserved[2];
+} __rte_cache_aligned;
+
+extern struct rte_eventdev_api *rte_eventdev_api;
+
#define RTE_EVENTDEV_NAME_MAX_LEN (64)
/**< @internal Max length of name of event PMD */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..bc2912dcfd 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
rte_event_timer_cancel_burst;
rte_eventdevs;
+ #added in 21.11
+ rte_eventdev_api;
+
local: *;
};
@@ -141,6 +144,7 @@ EXPERIMENTAL {
INTERNAL {
global:
+ rte_event_dev_api_reset;
rte_event_pmd_selftest_seqn_dynfield_offset;
rte_event_pmd_allocate;
rte_event_pmd_get_named_dev;
--
2.17.1
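To illustrate the intent of the flat array (a sketch, not part of the
patch): once the fastpath pointers live in rte_eventdev_api[], an inline
wrapper can dispatch on dev_id alone, without dereferencing struct
rte_eventdev:

/* Hypothetical inline wrapper over the array added by this patch. */
static inline uint16_t
app_event_enqueue(uint8_t dev_id, uint8_t port_id,
		  const struct rte_event *ev)
{
	return rte_eventdev_api[dev_id].enqueue(dev_id, port_id, ev);
}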
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v8] eal: remove sys/queue.h from public headers.
@ 2021-08-23 13:03 1% ` William Tu
2021-08-24 16:21 1% ` [dpdk-dev] [PATCH v9] " William Tu
0 siblings, 1 reply; 200+ results
From: William Tu @ 2021-08-23 13:03 UTC (permalink / raw)
To: dev; +Cc: Dmitry.Kozliuk, Nick Connolly
Currently there are some public headers that include 'sys/queue.h', which
is not POSIX, but usually provided by the Linux/BSD system library.
(Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building the DPDK library itself works fine. But when OVS
or other applications use DPDK as a library on Windows, the build fails
because some DPDK public headers include 'sys/queue.h' and no such file
exists.
One solution is to install 'lib/eal/windows/include/sys/queue.h' into the
Windows environment, as in [1]. However, this means DPDK exports the
functionality of 'sys/queue.h' into the environment, which might cause
symbol, macro, and header clashes with other applications.
The patch fixes it by removing the "#include <sys/queue.h>" from
DPDK public headers, so programs including DPDK headers don't depend
on the system to provide 'sys/queue.h'. When these public headers use
macros such as TAILQ_xxx, we replace them with the RTE_-prefixed ones.
For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
in Windows EAL. Note that these RTE_ macros are compatible with
<sys/queue.h>, both at the level of API (to use with <sys/queue.h>
macros in C files) and ABI (to avoid breaking it).
Additionally, TAILQ_FOREACH_SAFE is not part of <sys/queue.h>;
the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
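To illustrate the compatibility claim above, a minimal sketch (not part
of the patch) mixing the RTE_ macros with the system's TAILQ_* macros on
a platform that provides <sys/queue.h>:

#include <stdlib.h>
#include <sys/queue.h> /* system header, for TAILQ_REMOVE below */
#include <rte_os.h>    /* RTE_TAILQ_HEAD/RTE_TAILQ_ENTRY, per this patch */
#include <rte_tailq.h> /* RTE_TAILQ_FOREACH_SAFE, per this patch */

struct item {
	RTE_TAILQ_ENTRY(item) next; /* same layout as TAILQ_ENTRY */
	int value;
};

RTE_TAILQ_HEAD(item_list, item);

static void
drain(struct item_list *list)
{
	struct item *it, *tmp;

	RTE_TAILQ_FOREACH_SAFE(it, list, next, tmp) {
		TAILQ_REMOVE(list, it, next); /* system macro interoperates */
		free(it);
	}
}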
Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v7-v8:
* remove duplicate RTE_TAILQ_FOREACH_SAFE from rte_os.h;
put the macro in rte_tailq.h
* remove inline comments
* diff
https://github.com/williamtu/dpdk/compare/a4144ff11b..6cb7cd8daf
v6-v7:
* remove some redundant "#include <sys/queue.h>"
* remove extra newline, add comment at rte_os.h for windows
use of bundled sys/queue
v5-v6:
* fix tab/indent issue, fix typo and spelling
* fix duplicate RTE_TAILQ_FOREACH_SAFE
* fix build error due to drivers/net/mlx5/mlx5_flow_meter.c
---
drivers/bus/auxiliary/private.h | 1 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 5 ++--
drivers/bus/dpaa/dpaa_bus.c | 4 ++--
drivers/bus/fslmc/fslmc_bus.c | 4 ++--
drivers/bus/fslmc/fslmc_vfio.c | 9 +++++---
drivers/bus/ifpga/rte_bus_ifpga.h | 8 +++----
drivers/bus/pci/pci_params.c | 2 ++
drivers/bus/pci/rte_bus_pci.h | 13 +++++------
drivers/bus/pci/windows/pci.c | 3 +++
drivers/bus/pci/windows/pci_netuio.c | 2 ++
drivers/bus/vdev/rte_bus_vdev.h | 7 +++---
drivers/bus/vdev/vdev.c | 3 ++-
drivers/bus/vmbus/rte_bus_vmbus.h | 13 +++++------
drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 2 +-
drivers/net/bonding/rte_eth_bond_flow.c | 2 +-
drivers/net/failsafe/failsafe_flow.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 9 ++++----
drivers/net/i40e/i40e_ethdev.h | 1 +
drivers/net/i40e/i40e_flow.c | 6 ++---
drivers/net/i40e/i40e_hash.c | 2 +-
drivers/net/i40e/rte_pmd_i40e.c | 6 ++---
drivers/net/iavf/iavf_generic_flow.c | 14 +++++------
drivers/net/ice/ice_dcf_ethdev.c | 1 +
drivers/net/ice/ice_ethdev.c | 4 ++--
drivers/net/ice/ice_generic_flow.c | 14 +++++------
drivers/net/ipn3ke/ipn3ke_flow.c | 2 +-
drivers/net/mlx5/mlx5_flow_dv.c | 2 +-
drivers/net/mlx5/mlx5_flow_meter.c | 2 +-
drivers/net/softnic/rte_eth_softnic_flow.c | 3 ++-
drivers/net/softnic/rte_eth_softnic_swq.c | 2 +-
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 2 +-
lib/bbdev/rte_bbdev.h | 2 +-
lib/cryptodev/rte_cryptodev.h | 2 +-
lib/cryptodev/rte_cryptodev_pmd.h | 2 +-
lib/eal/common/eal_common_devargs.c | 4 ++--
lib/eal/common/eal_common_log.c | 1 +
lib/eal/common/eal_common_options.c | 2 +-
lib/eal/common/eal_private.h | 1 +
lib/eal/freebsd/include/rte_os.h | 10 ++++++++
lib/eal/include/rte_bus.h | 5 ++--
lib/eal/include/rte_class.h | 6 ++---
lib/eal/include/rte_dev.h | 5 ++--
lib/eal/include/rte_devargs.h | 3 +--
lib/eal/include/rte_log.h | 1 -
lib/eal/include/rte_service.h | 1 -
lib/eal/include/rte_tailq.h | 15 ++++++------
lib/eal/linux/include/rte_os.h | 10 ++++++++
lib/eal/windows/eal_alarm.c | 1 +
lib/eal/windows/include/rte_os.h | 27 ++++++++++++++++++++++
lib/efd/rte_efd.c | 2 +-
lib/ethdev/rte_ethdev_core.h | 2 +-
lib/hash/rte_fbk_hash.h | 1 -
lib/hash/rte_thash.c | 2 ++
lib/ip_frag/rte_ip_frag.h | 4 ++--
lib/mempool/rte_mempool.c | 2 +-
lib/mempool/rte_mempool.h | 9 ++++----
lib/pci/rte_pci.h | 1 -
lib/ring/rte_ring_core.h | 1 -
lib/table/rte_swx_table.h | 7 +++---
lib/table/rte_swx_table_selector.h | 5 ++--
lib/vhost/iotlb.c | 11 +++++----
lib/vhost/rte_vdpa_dev.h | 2 +-
lib/vhost/vdpa.c | 2 +-
63 files changed, 176 insertions(+), 123 deletions(-)
diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
#include <stdbool.h>
#include <stdio.h>
+#include <sys/queue.h>
#include "rte_bus_auxiliary.h"
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
#include <stdlib.h>
#include <limits.h>
#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
#include <inttypes.h>
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
* A structure describing an auxiliary device.
*/
struct rte_auxiliary_device {
- TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
+ RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
struct rte_intr_handle intr_handle; /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
* A structure describing an auxiliary driver.
*/
struct rte_auxiliary_driver {
- TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
struct rte_driver driver; /**< Inherit core driver. */
struct rte_auxiliary_bus *bus; /**< Auxiliary bus reference. */
rte_auxiliary_match_t *match; /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
- TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
comp = compare_dpaa_devices(newdev, dev);
if (comp < 0) {
TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
- TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
free(dev);
dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
struct rte_dpaa2_device *dev;
struct rte_dpaa2_device *t_dev;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
free(dev);
dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
struct rte_dpaa2_device *dev = NULL;
struct rte_dpaa2_device *tdev = NULL;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
comp = compare_dpaa2_devname(newdev, dev);
if (comp < 0) {
TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
int dpmcp_count = 0, dpio_count = 0, current_device;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+ dev_temp) {
if (dev->dev_type == DPAA2_MPORTAL) {
dpmcp_count++;
if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
/* Search the MCP as that should be initialized first. */
current_device = 0;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+ dev_temp) {
if (dev->dev_type == DPAA2_MPORTAL) {
current_device++;
if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
}
current_device = 0;
- TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+ RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+ dev_temp) {
if (dev->dev_type == DPAA2_IO)
current_device++;
if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..a85e90d384 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
struct rte_afu_driver;
/** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
/** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
#define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
@@ -71,7 +71,7 @@ struct rte_afu_shared {
* A structure describing a AFU device.
*/
struct rte_afu_device {
- TAILQ_ENTRY(rte_afu_device) next; /**< Next in device list. */
+ RTE_TAILQ_ENTRY(rte_afu_device) next; /**< Next in device list. */
struct rte_device device; /**< Inherit core device */
struct rte_rawdev *rawdev; /**< Point Rawdev */
struct rte_afu_id id; /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
* A structure describing a AFU device.
*/
struct rte_afu_driver {
- TAILQ_ENTRY(rte_afu_driver) next; /**< Next afu driver. */
+ RTE_TAILQ_ENTRY(rte_afu_driver) next; /**< Next afu driver. */
struct rte_driver driver; /**< Inherit core driver. */
afu_probe_t *probe; /**< Device Probe function. */
afu_remove_t *remove; /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
* Copyright 2018 Gaëtan Rivet
*/
+#include <sys/queue.h>
+
#include <rte_bus.h>
#include <rte_bus_pci.h>
#include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
#include <stdlib.h>
#include <limits.h>
#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
#include <inttypes.h>
@@ -37,16 +36,16 @@ struct rte_pci_device;
struct rte_pci_driver;
/** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
/** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
/* PCI Bus iterators */
#define FOREACH_DEVICE_ON_PCIBUS(p) \
- TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
#define FOREACH_DRIVER_ON_PCIBUS(p) \
- TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
struct rte_devargs;
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
* A structure describing a PCI device.
*/
struct rte_pci_device {
- TAILQ_ENTRY(rte_pci_device) next; /**< Next probed PCI device. */
+ RTE_TAILQ_ENTRY(rte_pci_device) next; /**< Next probed PCI device. */
struct rte_device device; /**< Inherit core device */
struct rte_pci_addr addr; /**< PCI location. */
struct rte_pci_id id; /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
* A structure describing a PCI driver.
*/
struct rte_pci_driver {
- TAILQ_ENTRY(rte_pci_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_pci_driver) next; /**< Next in list. */
struct rte_driver driver; /**< Inherit core driver. */
struct rte_pci_bus *bus; /**< PCI bus reference. */
rte_pci_probe_t *probe; /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright 2020 Mellanox Technologies, Ltd
*/
+
+#include <sys/queue.h>
+
#include <rte_windows.h>
#include <rte_errno.h>
#include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
* Copyright(c) 2020 Intel Corporation.
*/
+#include <sys/queue.h>
+
#include <rte_windows.h>
#include <rte_errno.h>
#include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
extern "C" {
#endif
-#include <sys/queue.h>
#include <rte_dev.h>
#include <rte_devargs.h>
struct rte_vdev_device {
- TAILQ_ENTRY(rte_vdev_device) next; /**< Next attached vdev */
+ RTE_TAILQ_ENTRY(rte_vdev_device) next; /**< Next attached vdev */
struct rte_device device; /**< Inherit core device */
};
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
}
/** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
/**
* Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
* A virtual device driver abstraction.
*/
struct rte_vdev_driver {
- TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
struct rte_driver driver; /**< Inherited general driver. */
rte_vdev_probe_t *probe; /**< Virtual device probe function. */
rte_vdev_remove_t *remove; /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
struct vdev_custom_scan *custom_scan, *tmp_scan;
rte_spinlock_lock(&vdev_custom_scan_lock);
- TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+ RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+ tmp_scan) {
if (custom_scan->callback != callback ||
(custom_scan->user_arg != (void *)-1 &&
custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
#include <limits.h>
#include <stdbool.h>
#include <errno.h>
-#include <sys/queue.h>
#include <stdint.h>
#include <inttypes.h>
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
struct vmbus_channel;
struct vmbus_mon_page;
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
/* VMBus iterators */
#define FOREACH_DEVICE_ON_VMBUS(p) \
- TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
#define FOREACH_DRIVER_ON_VMBUS(p) \
- TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+ RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
/** Maximum number of VMBUS resources. */
enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
* A structure describing a VMBUS device.
*/
struct rte_vmbus_device {
- TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
+ RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
const struct rte_vmbus_driver *driver; /**< Associated driver */
struct rte_device device; /**< Inherit core device */
rte_uuid_t device_id; /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
* A structure describing a VMBUS driver.
*/
struct rte_vmbus_driver {
- TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
struct rte_driver driver;
struct rte_vmbus_bus *bus; /**< VM bus reference. */
vmbus_probe_t *probe; /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
struct ulp_context_list_entry *entry, *temp;
rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
- TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
if (entry->ulp_ctx == ulp_ctx) {
TAILQ_REMOVE(&ulp_cntx_list, entry, next);
rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
/* Destroy all bond flows from its slaves instead of flushing them to
* keep the LACP flow or any other external flows.
*/
- TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
lret = bond_flow_destroy(dev, flow, err);
if (unlikely(lret != 0))
ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
return ret;
}
}
- TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
fs_flow_release(&flow);
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
/* VSI has child to attach, release child first */
if (vsi->veb) {
- TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+ RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
return -1;
}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
}
if (vsi->floating_veb) {
- TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+ RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+ list, temp) {
if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
return -1;
}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
/* Remove all macvlan filters of the VSI */
i40e_vsi_remove_all_macvlan_filter(vsi);
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
rte_free(f);
if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
i = 0;
/* Remove all existing mac */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
mac_filter[i] = f->mac_info;
ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
#define _I40E_ETHDEV_H_
#include <stdint.h>
+#include <sys/queue.h>
#include <rte_time.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
}
/* Delete FDIR flows in flow list. */
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
}
/* Delete ethertype flows in flow list. */
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
}
/* Delete tunnel flows in flow list. */
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
{
struct rte_flow *flow, *next;
- TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
if (flow->filter_type != RTE_ETH_FILTER_HASH)
continue;
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
void *temp;
/* remove all the MACs */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
vlan_num = vsi->vlan_num;
filter_type = f->mac_info.filter_type;
if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
void *temp;
/* restore all the MACs */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
rte_ether_addr_copy(mac_addr, &vf->mac_addr);
/* Remove all existing mac */
- TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+ RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
!= I40E_SUCCESS)
PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
TAILQ_INIT(&vf->dist_parser_list);
rte_spinlock_init(&vf->flow_ops_lock);
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->init == NULL) {
PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
struct iavf_flow_parser_node *p_parser;
void *temp;
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->uninit)
engine->uninit(ad);
}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
if (list == NULL)
return;
- TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
if (p_parser->parser->engine->type == parser->engine->type) {
TAILQ_REMOVE(list, p_parser, node);
rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
void *temp;
void *meta = NULL;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
if (parser_node->parser->parse_pattern_action(ad,
parser_node->parser->array,
parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
void *temp;
void *meta = NULL;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
if (parser_node->parser->parse_pattern_action(ad,
parser_node->parser->array,
parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
void *temp;
if (flow && flow->engine) {
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine == flow->engine)
return true;
}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
void *temp;
int ret = 0;
- TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
ret = iavf_flow_destroy(dev, p_flow, error);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
#include <errno.h>
#include <stdbool.h>
+#include <sys/queue.h>
#include <sys/types.h>
#include <unistd.h>
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
if (!vsi || !vsi->mac_num)
return -EINVAL;
- TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
if (ret != ICE_SUCCESS) {
ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
if (vsi->vlan_num == 0)
return 0;
- TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
if (ret != ICE_SUCCESS) {
ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
TAILQ_INIT(&pf->dist_parser_list);
rte_spinlock_init(&pf->flow_ops_lock);
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->init == NULL) {
PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
struct ice_flow_parser_node *p_parser;
void *temp;
- TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->uninit)
engine->uninit(ad);
}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
if (list == NULL)
return;
- TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
if (p_parser->parser->engine->type == parser->engine->type) {
TAILQ_REMOVE(list, p_parser, node);
rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
void *meta = NULL;
void *temp;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
int ret;
if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
struct ice_flow_parser_node *parser_node;
void *temp;
- TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
if (parser_node->parser->parse_pattern_action(ad,
parser_node->parser->array,
parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
void *temp;
int ret = 0;
- TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
ret = ice_flow_destroy(dev, p_flow, error);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
rte_spinlock_lock(&pf->flow_ops_lock);
- TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
if (!p_flow->engine->redirect)
continue;
ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
struct rte_flow *flow, *temp;
- TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
TAILQ_REMOVE(&hw->flow_list, flow, next);
rte_free(flow);
}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 31d857030f..ba2bf4de37 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15099,7 +15099,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
next_fm = mlx5_flow_meter_find(priv,
policy->act_cnt[i].next_mtr_id, NULL);
- TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
+ RTE_TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
next_port, tmp) {
claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
tbl = container_of(color_rule->matcher->tbl,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a24bd9c7ae..ba4e9fca17 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -2168,7 +2168,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
priv->mtr_idx_tbl = NULL;
}
} else {
- TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
fm = &legacy_fm->fm;
if (mlx5_flow_meter_params_flush(dev, fm, 0))
return -rte_mtr_error_set(error, EINVAL,
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
void *temp;
int status;
- TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+ RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+ temp) {
/* Rule delete. */
status = softnic_pipeline_table_rule_delete
(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
{
struct softnic_swq *swq, *tswq;
- TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+ RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
DPAA2_QDMA_FUNC_TRACE();
- TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+ RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
if (queue->dpdmai_dev == dpdmai_dev) {
TAILQ_REMOVE(&qdma_queue_list, queue, next);
rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
struct rte_intr_handle;
/** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
/**
* @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
struct rte_cryptodev_callback;
/** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
/**
* Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
/* Cryptodev driver, containing the driver ID */
struct cryptodev_driver {
- TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
const struct rte_driver *driver;
uint8_t id;
};
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..2e2f35c47e 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -291,7 +291,7 @@ rte_devargs_insert(struct rte_devargs **da)
if (*da == NULL || (*da)->bus == NULL)
return -1;
- TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
if (listed_da == *da)
/* devargs already in the list */
return 0;
@@ -358,7 +358,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
if (devargs == NULL || devargs->bus == NULL)
return -1;
- TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
strcmp(d->name, devargs->name) == 0) {
TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
#include <errno.h>
#include <regex.h>
#include <fnmatch.h>
+#include <sys/queue.h>
#include <rte_eal.h>
#include <rte_log.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..24f5ceaab0 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -283,7 +283,7 @@ eal_option_device_parse(void)
void *tmp;
int ret = 0;
- TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
if (ret == 0) {
ret = rte_devargs_add(devopt->type, devopt->arg);
if (ret)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..86dab1f057 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -8,6 +8,7 @@
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
+#include <sys/queue.h>
#include <rte_dev.h>
#include <rte_lcore.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..9d8a69008c 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,16 @@
*/
#include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
typedef cpuset_t rte_cpuset_t;
#define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
#endif
#include <stdio.h>
-#include <sys/queue.h>
#include <rte_log.h>
#include <rte_dev.h>
/** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
/**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
* A structure describing a generic bus.
*/
struct rte_bus {
- TAILQ_ENTRY(rte_bus) next; /**< Next bus object in linked list */
+ RTE_TAILQ_ENTRY(rte_bus) next; /**< Next bus object in linked list */
const char *name; /**< Name of the bus */
rte_bus_scan_t scan; /**< Scan for devices attached to bus */
rte_bus_probe_t probe; /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
extern "C" {
#endif
-#include <sys/queue.h>
-
#include <rte_dev.h>
/** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
/**
* A structure describing a generic device class.
*/
struct rte_class {
- TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+ RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
const char *name; /**< Name of the class */
rte_dev_iterate_t dev_iterate; /**< Device iterator. */
};
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
#endif
#include <stdio.h>
-#include <sys/queue.h>
#include <rte_config.h>
#include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
* A structure describing a device driver.
*/
struct rte_driver {
- TAILQ_ENTRY(rte_driver) next; /**< Next in list. */
+ RTE_TAILQ_ENTRY(rte_driver) next; /**< Next in list. */
const char *name; /**< Driver name. */
const char *alias; /**< Driver alias. */
};
@@ -90,7 +89,7 @@ struct rte_driver {
* A structure describing a generic device.
*/
struct rte_device {
- TAILQ_ENTRY(rte_device) next; /**< Next device */
+ RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
const char *name; /**< Device name */
const struct rte_driver *driver; /**< Driver assigned after probing */
const struct rte_bus *bus; /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
#endif
#include <stdio.h>
-#include <sys/queue.h>
#include <rte_compat.h>
#include <rte_bus.h>
@@ -76,7 +75,7 @@ enum rte_devtype {
*/
struct rte_devargs {
/** Next in list. */
- TAILQ_ENTRY(rte_devargs) next;
+ RTE_TAILQ_ENTRY(rte_devargs) next;
/** Type of device. */
enum rte_devtype type;
/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
#include <stdio.h>
#include <stdarg.h>
#include <stdbool.h>
-#include <sys/queue.h>
#include <rte_common.h>
#include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
#include<stdio.h>
#include <stdint.h>
-#include <sys/queue.h>
#include <rte_config.h>
#include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..e860582cda 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
extern "C" {
#endif
-#include <sys/queue.h>
#include <stdio.h>
#include <rte_debug.h>
/** dummy structure type used by the rte_tailq APIs */
struct rte_tailq_entry {
- TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+ RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
void *data; /**< Pointer to the data referenced by this tailq entry */
};
/** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
#define RTE_TAILQ_NAMESIZE 32
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
* rte_eal_tailqs_init()
*/
struct rte_tailq_head *head;
- TAILQ_ENTRY(rte_tailq_elem) next;
+ RTE_TAILQ_ENTRY(rte_tailq_elem) next;
const char name[RTE_TAILQ_NAMESIZE];
};
@@ -126,10 +125,10 @@ RTE_INIT(tailqinitfn_ ##t) \
}
/* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
- for ((var) = TAILQ_FIRST((head)); \
- (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+#ifndef RTE_TAILQ_FOREACH_SAFE
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = RTE_TAILQ_FIRST((head)); \
+ (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1); \
(var) = (tvar))
#endif
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..35c07c70cb 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,16 @@
*/
#include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
#ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
#include <stdatomic.h>
#include <stdbool.h>
+#include <sys/queue.h>
#include <rte_alarm.h>
#include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..a0a311495e 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,33 @@
extern "C" {
#endif
+/* These macros are compatible with bundled sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) \
+struct name { \
+ struct type *tqh_first; \
+ struct type **tqh_last; \
+}
+#define RTE_TAILQ_ENTRY(type) \
+struct { \
+ struct type *tqe_next; \
+ struct type **tqe_prev; \
+}
+#define RTE_TAILQ_FOREACH(var, head, field) \
+ for ((var) = RTE_TAILQ_FIRST((head)); \
+ (var); \
+ (var) = RTE_TAILQ_NEXT((var), field))
+#define RTE_TAILQ_FIRST(head) ((head)->tqh_first)
+#define RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define RTE_STAILQ_HEAD(name, type) \
+struct name { \
+ struct type *stqh_first; \
+ struct type **stqh_last; \
+}
+#define RTE_STAILQ_ENTRY(type) \
+struct { \
+ struct type *stqe_next; \
+}
+
/* cpu_set macros implementation */
#define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
#define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
rte_mcfg_tailq_write_lock();
- TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+ RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
if (te->data == (void *) table) {
TAILQ_REMOVE(efd_list, te, next);
rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
struct rte_eth_dev_callback;
/** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
struct rte_eth_dev;
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
#include <stdint.h>
#include <errno.h>
-#include <sys/queue.h>
#ifdef __cplusplus
extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
* Copyright(c) 2021 Intel Corporation
*/
+#include <sys/queue.h>
+
#include <rte_thash.h>
#include <rte_tailq.h>
#include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
* First two entries in the frags[] array are for the last and first fragments.
*/
struct ip_frag_pkt {
- TAILQ_ENTRY(ip_frag_pkt) lru; /**< LRU list */
+ RTE_TAILQ_ENTRY(ip_frag_pkt) lru; /**< LRU list */
struct ip_frag_key key; /**< fragmentation key */
uint64_t start; /**< creation timestamp */
uint32_t total_size; /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
/**< mbufs to be freed */
};
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
/** fragmentation table statistics */
struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
rte_mcfg_mempool_read_lock();
- TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+ RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
(*func)((struct rte_mempool *) te->data, arg);
}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
#include <stdint.h>
#include <errno.h>
#include <inttypes.h>
-#include <sys/queue.h>
#include <rte_config.h>
#include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
* double-frees.
*/
struct rte_mempool_objhdr {
- STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+ RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
struct rte_mempool *mp; /**< The mempool owning the object. */
rte_iova_t iova; /**< IO address of the object. */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
/**
* A list of object headers type
*/
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
/**
* A list of memory where objects are stored
*/
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
/**
* Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
* and physically contiguous.
*/
struct rte_mempool_memhdr {
- STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+ RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
struct rte_mempool *mp; /**< The mempool owning the chunk */
void *addr; /**< Virtual address of the chunk */
rte_iova_t iova; /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
#include <stdio.h>
#include <limits.h>
-#include <sys/queue.h>
#include <inttypes.h>
#include <sys/types.h>
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
#include <stdio.h>
#include <stdint.h>
#include <string.h>
-#include <sys/queue.h>
#include <errno.h>
#include <rte_common.h>
#include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
*/
#include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
/** Match type. */
enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
/** Used to facilitate the membership of this table entry to a
* linked list.
*/
- TAILQ_ENTRY(rte_swx_table_entry) node;
+ RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
/** Key value for the current entry. Array of *key_size* bytes or NULL
* if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
};
/** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
/**
* Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
*/
#include <stdint.h>
-#include <sys/queue.h>
#include <rte_compat.h>
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
/** Group member parameters. */
struct rte_swx_table_selector_member {
/** Linked list connectivity. */
- TAILQ_ENTRY(rte_swx_table_selector_member) node;
+ RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
/** Member ID. */
uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
};
/** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
/** Group parameters. */
struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
rte_rwlock_write_lock(&vq->iotlb_pending_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
rte_mempool_put(vq->iotlb_pool, node);
}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
rte_rwlock_write_lock(&vq->iotlb_pending_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+ temp_node) {
if (node->iova < iova)
continue;
if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
rte_rwlock_write_lock(&vq->iotlb_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
TAILQ_REMOVE(&vq->iotlb_list, node, next);
rte_mempool_put(vq->iotlb_pool, node);
}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
entry_idx = rte_rand() % vq->iotlb_cache_nr;
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
if (!entry_idx) {
TAILQ_REMOVE(&vq->iotlb_list, node, next);
rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
rte_rwlock_write_lock(&vq->iotlb_lock);
- TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+ RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
/* Sorted list */
if (unlikely(iova + size < node->iova))
break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
* vdpa device structure includes device address and device operations.
*/
struct rte_vdpa_device {
- TAILQ_ENTRY(rte_vdpa_device) next;
+ RTE_TAILQ_ENTRY(rte_vdpa_device) next;
/** Generic device information */
struct rte_device *device;
/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
int ret = -1;
rte_spinlock_lock(&vdpa_device_list_lock);
- TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+ RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
if (dev != cur_dev)
continue;
--
2.30.2
^ permalink raw reply [relevance 1%]
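For reference, the caller-side pattern this patch converts is the safe-removal
loop below. This is a minimal sketch; the struct, list, and function names are
invented for illustration and do not come from the patch:

#include <rte_tailq.h>
#include <rte_malloc.h>

struct my_entry {
	RTE_TAILQ_ENTRY(my_entry) next; /* linkage via the OS-independent macro */
	int id;
};

RTE_TAILQ_HEAD(my_list, my_entry);

static void
remove_matching(struct my_list *list, int id)
{
	struct my_entry *e;
	void *tmp;

	/* tmp caches the next element, so e may be removed and freed
	 * inside the loop body without breaking the iteration. */
	RTE_TAILQ_FOREACH_SAFE(e, list, next, tmp) {
		if (e->id == id) {
			TAILQ_REMOVE(list, e, next);
			rte_free(e);
		}
	}
}

The renamed RTE_TAILQ_FOREACH_SAFE keeps the same semantics as the old
TAILQ_FOREACH_SAFE while no longer colliding with (or depending on) the
platform's sys/queue.h.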
* Re: [dpdk-dev] [PATCH] doc: abstract the behaviour of rte_ctrl_thread_create
@ 2021-08-23 9:40 3% ` Olivier Matz
2021-08-23 21:18 0% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2021-08-23 9:40 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: thomas, dev, lucp.at.work, david.marchand, Ruifeng Wang, nd
Hi Honnappa,
Back from holidays, sorry for the late answer.
On Mon, Aug 09, 2021 at 01:18:42PM +0000, Honnappa Nagarahalli wrote:
> <snip>
> >
> > 30/07/2021 23:44, Honnappa Nagarahalli:
> > > The current expected behaviour of the function rte_ctrl_thread_create
> > > is rigid which makes the implementation of the function complex.
> > > Make the expected behaviour abstract to allow for simplified
> > > implementation.
> > >
> > > With this change, the calls to pthread_setaffinity_np can be moved to
> > > the control thread. This will avoid the use of pthread_barrier_wait
> > > and simplify the synchronization mechanism between
> > > rte_ctrl_thread_create and the calling thread.
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > ---
> > > +* eal: The expected behaviour of the function
> > > +``rte_ctrl_thread_create``
> > > + abstracted to allow for simplified implementation. The new
> > > +behaviour is
> > > + as follows:
> > > + Creates a control thread with the given name. The affinity of the
> > > +new
> > > + thread is based on the CPU affinity retrieved at the time
> > > +rte_eal_init()
> > > + was called, the dataplane and service lcores are then excluded.
> >
> > I don't understand what is different of the current API:
> > * Wrapper to pthread_create(), pthread_setname_np() and
> > * pthread_setaffinity_np(). The affinity of the new thread is based
> > * on the CPU affinity retrieved at the time rte_eal_init() was called,
> > * the dataplane and service lcores are then excluded.
> My concern is for the word "Wrapper". I am not sure how much we are bound by that to keep the code as a "wrapper".
> The new patch does not change the high level behavior.
I am ok to remove the word "wrapper" from the description, and I agree
it can be better described without quoting the pthread_* functions.
> Are you saying you are ok with the patch without the deprecation notice?
I don't think it requires a deprecation notice if the API and ABI are
left unchanged. To be honest, I find it a bit hard to understand what has
really changed by reading the deprecation notice:
> +* eal: The expected behaviour of the function ``rte_ctrl_thread_create``
> + abstracted to allow for simplified implementation. The new behaviour is
> + as follows:
> + Creates a control thread with the given name. The affinity of the new
> + thread is based on the CPU affinity retrieved at the time rte_eal_init()
> + was called, the dataplane and service lcores are then excluded.
I'll send my comments to your patch:
http://patches.dpdk.org/project/dpdk/patch/20210802051652.3611-1-honnappa.nagarahalli@arm.com/
Thanks,
Olivier
^ permalink raw reply [relevance 3%]
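Whatever wording the notice ends up with, the usage contract stays simple:
the caller supplies a name and an entry point, and the affinity handling is
internal. A minimal sketch, assuming the behaviour described in the notice:

#include <pthread.h>
#include <rte_lcore.h>

static void *
ctrl_main(void *arg)
{
	(void)arg;
	/* Runs on the cpuset captured at rte_eal_init() time, minus
	 * dataplane and service lcores, as the notice describes. */
	return NULL;
}

static int
spawn_ctrl_thread(void)
{
	pthread_t tid;

	/* Keep the name short: Linux truncates thread names at 16 bytes. */
	return rte_ctrl_thread_create(&tid, "example-ctrl", NULL,
			ctrl_main, NULL);
}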
* Re: [dpdk-dev] [PATCH 21.11 v2 0/3] octeontx build only on 64-bit Linux
@ 2021-08-21 14:07 0% Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-08-21 14:07 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Thomas Monjalon, Jerin Jacob Kollanukkaran
>On Thu, Mar 25, 2021 at 3:52 PM Thomas Monjalon
><thomas@monjalon.net> wrote:
>>
>> This is a reorg of the patches from Pavan.
>> It has been discussed that it should wait for DPDK 21.11
>> for ABI compatibility reason.
>>
>> Pavan Nikhilesh (3):
>> net/thunderx: enable build only on 64-bit Linux
>> common/octeontx: enable build only on 64-bit Linux
>> common/octeontx2: enable build only on 64-bit Linux
>>
>> drivers/common/octeontx/meson.build | 6 ++++++
>> drivers/common/octeontx2/meson.build | 4 ++--
>> drivers/compress/octeontx/meson.build | 6 ++++++
>> drivers/crypto/octeontx/meson.build | 7 +++++--
>> drivers/event/octeontx/meson.build | 6 ++++++
>> drivers/event/octeontx2/meson.build | 4 ++--
>> drivers/mempool/octeontx/meson.build | 5 +++--
>> drivers/mempool/octeontx2/meson.build | 9 ++-------
>> drivers/net/octeontx/meson.build | 4 ++--
>> drivers/net/octeontx2/meson.build | 10 ++--------
>> drivers/net/thunderx/meson.build | 4 ++--
>> drivers/raw/octeontx2_dma/meson.build | 10 ++++++----
>> 12 files changed, 44 insertions(+), 31 deletions(-)
>
>There were a couple of cleanups (indent etc..) and changes in meson
>files.
>This series does not apply cleanly on the main branch.
>Could you rebase it?
Ack, I will rebase it.
>
>I noticed that the net/cnxk driver does not have this check, but it is
>disabled anyway since it depends on common/cnxk.
>Is it worth adding the check for consistency?
Sure, I will add checks to the cnxk driver family.
Thanks,
Pavan.
>
>
>--
>David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure
2021-08-20 16:28 3% [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
@ 2021-08-20 16:28 2% ` Konstantin Ananyev
2021-08-26 12:37 3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
1 sibling, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
beilei.xing, techboard, Konstantin Ananyev
Move public function pointers (rx_pkt_burst(), etc.) from rte_eth_dev
into a separate flat array. We can keep it public to still use inline
functions for 'fast' calls (like rte_eth_rx_burst(), etc.) to
avoid/minimize slowdown.
The intention is to make rte_eth_dev and related structures internal.
That should allow possible future changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/ethdev/ethdev_private.c | 74 ++++++++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.h | 3 ++
lib/ethdev/rte_ethdev.c | 12 ++++++
lib/ethdev/rte_ethdev_core.h | 33 ++++++++++++++++
4 files changed, 122 insertions(+)
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..1ab64d24cf 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,77 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
return str == NULL ? -1 : 0;
}
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused uint16_t port_id,
+ __rte_unused uint16_t queue_id,
+ __rte_unused struct rte_mbuf **rx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_LOG(ERR, EAL, "rx_pkt_burst for unconfigured port %u\n", port_id);
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused uint16_t port_id,
+ __rte_unused uint16_t queue_id,
+ __rte_unused struct rte_mbuf **tx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_LOG(ERR, EAL, "tx_pkt_burst for unconfigured port %u\n", port_id);
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static uint16_t
+dummy_eth_tx_prepare(__rte_unused uint16_t port_id,
+ __rte_unused uint16_t queue_id,
+ __rte_unused struct rte_mbuf **tx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ RTE_LOG(ERR, EAL, "tx_pkt_prepare for unconfigured port %u\n", port_id);
+ rte_errno = ENOTSUP;
+ return 0;
+}
+
+static int
+dummy_eth_rx_queue_count(__rte_unused uint16_t port_id,
+ __rte_unused uint16_t queue_id)
+{
+ RTE_LOG(ERR, EAL, "rx_queue_count for unconfigured port %u\n", port_id);
+ return -ENOTSUP;
+}
+
+static int
+dummy_eth_rx_descriptor_status(__rte_unused uint16_t port_id,
+ __rte_unused uint16_t queue_id, __rte_unused uint16_t offset)
+{
+ RTE_LOG(ERR, EAL, "rx_descriptor_status for unconfigured port %u\n",
+ port_id);
+ return -ENOTSUP;
+}
+
+static int
+dummy_eth_tx_descriptor_status(__rte_unused uint16_t port_id,
+ __rte_unused uint16_t queue_id, __rte_unused uint16_t offset)
+{
+ RTE_LOG(ERR, EAL, "tx_descriptor_status for unconfigured port %u\n",
+ port_id);
+ return -ENOTSUP;
+}
+
+void
+rte_eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
+{
+ static const struct rte_eth_burst_api dummy = {
+ .rx_pkt_burst = dummy_eth_rx_burst,
+ .tx_pkt_burst = dummy_eth_tx_burst,
+ .tx_pkt_prepare = dummy_eth_tx_prepare,
+ .rx_queue_count = dummy_eth_rx_queue_count,
+ .rx_descriptor_status = dummy_eth_rx_descriptor_status,
+ .tx_descriptor_status = dummy_eth_tx_descriptor_status,
+ };
+
+ *rba = dummy;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 9bb0879538..b9b0e6755a 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -30,6 +30,9 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
/* Parse devargs value for representor parameter. */
int rte_eth_devargs_parse_representor_ports(char *str, void *data);
+/* reset eth 'burst' API to dummy values */
+void rte_eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..949292a617 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
+/* public 'fast/burst' API */
+struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
/* spinlock for eth device callbacks */
static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
@@ -1336,6 +1339,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
int diag;
int ret;
uint16_t old_mtu;
+ struct rte_eth_burst_api rba;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -1363,6 +1367,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
*/
dev->data->dev_configured = 0;
+ rba = rte_eth_burst_api[port_id];
+ rte_eth_dev_burst_api_reset(&rte_eth_burst_api[port_id]);
+
/* Store original config, as rollback required on failure */
memcpy(&orig_conf, &dev->data->dev_conf, sizeof(dev->data->dev_conf));
@@ -1623,6 +1630,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
if (old_mtu != dev->data->mtu)
dev->data->mtu = old_mtu;
+ rte_eth_burst_api[port_id] = rba;
+
rte_ethdev_trace_configure(port_id, nb_rx_q, nb_tx_q, dev_conf, ret);
return ret;
}
@@ -1871,6 +1880,7 @@ rte_eth_dev_close(uint16_t port_id)
dev = &rte_eth_devices[port_id];
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+ rte_eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
*lasterr = (*dev->dev_ops->dev_close)(dev);
if (*lasterr != 0)
lasterr = &binerr;
@@ -1892,6 +1902,8 @@ rte_eth_dev_reset(uint16_t port_id)
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
+ rte_eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
+
ret = rte_eth_dev_stop(port_id);
if (ret != 0) {
RTE_ETHDEV_LOG(ERR,
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..fb8526cb9f 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -25,21 +25,31 @@ TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
struct rte_eth_dev;
+typedef uint16_t (*rte_eth_rx_burst_t)(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
typedef uint16_t (*eth_rx_burst_t)(void *rxq,
struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
/**< @internal Retrieve input packets from a receive queue of an Ethernet device. */
+typedef uint16_t (*rte_eth_tx_burst_t)(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
typedef uint16_t (*eth_tx_burst_t)(void *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
/**< @internal Send output packets on a transmit queue of an Ethernet device. */
+typedef uint16_t (*rte_eth_tx_prep_t)(uint16_t port_id, uint16_t queue_id,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
typedef uint16_t (*eth_tx_prep_t)(void *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
/**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
+typedef int (*rte_eth_rx_queue_count_t)(uint16_t port_id, uint16_t queue_id);
typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
uint16_t rx_queue_id);
@@ -48,12 +58,35 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @internal Check DD bit of specific RX descriptor */
+typedef int (*rte_eth_rx_descriptor_status_t)(uint16_t port_id,
+ uint16_t queue_id, uint16_t offset);
+
typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/**< @internal Check the status of a Rx descriptor */
+typedef int (*rte_eth_tx_descriptor_status_t)(uint16_t port_id,
+ uint16_t queue_id, uint16_t offset);
+
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
/**< @internal Check the status of a Tx descriptor */
+struct rte_eth_burst_api {
+ rte_eth_rx_burst_t rx_pkt_burst;
+ /**< PMD receive function. */
+ rte_eth_tx_burst_t tx_pkt_burst;
+ /**< PMD transmit function. */
+ rte_eth_tx_prep_t tx_pkt_prepare;
+ /**< PMD transmit prepare function. */
+ rte_eth_rx_queue_count_t rx_queue_count;
+ /**< Get the number of used RX descriptors. */
+ rte_eth_rx_descriptor_status_t rx_descriptor_status;
+ /**< Check the status of a Rx descriptor. */
+ rte_eth_tx_descriptor_status_t tx_descriptor_status;
+ /**< Check the status of a Tx descriptor. */
+ uintptr_t reserved[2];
+} __rte_cache_min_aligned;
+
+extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
/**
* @internal
--
2.26.3
^ permalink raw reply [relevance 2%]
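To make the intent of the new flat array concrete: once rte_eth_burst_api[]
is populated for a port, the public inline wrappers can dispatch through it
instead of through rte_eth_dev. A rough sketch of what a later patch in the
series presumably does for Rx, with the debug checks and callback handling
of the real rte_eth_rx_burst() omitted:

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	/* Single indirect call through the public flat array; the
	 * rte_eth_dev structure itself is no longer touched here. */
	return rte_eth_burst_api[port_id].rx_pkt_burst(port_id, queue_id,
			rx_pkts, nb_pkts);
}

Calls on unconfigured ports stay safe because rte_eth_dev_burst_api_reset()
points every slot at the dummy_eth_* handlers, which set rte_errno to
ENOTSUP and return.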
* [dpdk-dev] [RFC 0/7] hide eth dev related structures
@ 2021-08-20 16:28 3% Konstantin Ananyev
2021-08-20 16:28 2% ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
2021-08-26 12:37 3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
0 siblings, 2 replies; 200+ results
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
beilei.xing, techboard, Konstantin Ananyev
NOTE: This is just an RFC to start further discussion and collect feedback.
Due to the significant amount of work involved, the required changes have
been applied to only two PMDs so far: net/i40e and net/ice.
So to build it you'll need to add:
-Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
to your config options.
The aim of this patch series is to make the rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to DPDK
and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, though it is certainly an ABI break.
The work is based on previous discussion at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
and consists of the following main points:
1. Move public 'fast' function pointers (rx_pkt_burst(), etc.) from
rte_eth_dev into a separate flat array. We keep it public to still be
able to use inline functions for these 'fast' calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
2. Change the prototype of these 'fast' functions within PMDs
(pkt_rx_burst(), etc.) to accept a <port_id, queue_id> pair
instead of a queue pointer.
3. Some mechanical changes in function start/finish code are also required,
basically to avoid an extra level of indirection: the PMD is required to do
some preliminary checks and data retrieval that are currently done at the
user level by the inline rte_eth* functions.
4. Special _rte_eth_*_prolog(/epilog) inline functions and helper macros
are provided to make these changes inside PMDs as straightforward
as possible.
5. Change the implementation of the 'fast' ethdev functions
(rte_eth_rx_burst(), etc.) to use the new public flat array.
6. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related things
into internal header: <ethdev_driver.h>.
That approach was selected to avoid(/minimize) possible performance losses.
So far I have done only a limited amount of functional and performance testing.
I didn't spot any functional problems, and the performance numbers
remain the same before and after the patch on my box (testpmd, macswap fwd).
Remaining items:
==============
- implement the required changes for all PMDs in drivers/net.
So far I have made changes for only two drivers, and would definitely use
some help from other PMD maintainers. The required changes are mechanical,
but we have a lot of drivers these days.
- <rte_bus_pci.h> references rte_eth_dev fields through the
RTE_ETH_DEV_TO_PCI(eth_dev) macro.
This macro needs to be moved into some internal header.
- Extra testing
- checkpatch warnings
- docs update
Konstantin Ananyev (7):
eth: move ethdev 'burst' API into separate structure
eth: make drivers to use new API for Rx
eth: make drivers to use new API for Tx
eth: make drivers to use new API for Tx prepare
eth: make drivers to use new API to obtain descriptor status
eth: make drivers to use new API for Rx queue count
eth: hide eth dev related structures
app/test-pmd/config.c | 23 +-
app/test/virtual_pmd.c | 27 +-
drivers/common/octeontx2/otx2_sec_idev.c | 2 +-
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 15 +-
drivers/net/i40e/i40e_ethdev_vf.c | 15 +-
drivers/net/i40e/i40e_rxtx.c | 243 ++++---
drivers/net/i40e/i40e_rxtx.h | 68 +-
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 11 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 12 +-
drivers/net/i40e/i40e_rxtx_vec_sse.c | 8 +-
drivers/net/i40e/i40e_vf_representor.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 10 +-
drivers/net/ice/ice_dcf_vf_representor.c | 10 +-
drivers/net/ice/ice_ethdev.c | 15 +-
drivers/net/ice/ice_rxtx.c | 236 ++++---
drivers/net/ice/ice_rxtx.h | 73 +--
drivers/net/ice/ice_rxtx_vec_avx2.c | 24 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 24 +-
drivers/net/ice/ice_rxtx_vec_common.h | 7 +-
drivers/net/ice/ice_rxtx_vec_sse.c | 12 +-
lib/ethdev/ethdev_driver.h | 601 ++++++++++++++++++
lib/ethdev/ethdev_private.c | 74 +++
lib/ethdev/ethdev_private.h | 3 +
lib/ethdev/rte_ethdev.c | 176 ++++-
lib/ethdev/rte_ethdev.h | 194 ++----
lib/ethdev/rte_ethdev_core.h | 182 ++----
lib/ethdev/version.map | 16 +
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/eventdev/rte_eventdev.c | 2 +-
31 files changed, 1488 insertions(+), 611 deletions(-)
--
2.26.3
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3] ethdev: fix representor port ID search by name
@ 2021-08-20 12:18 3% ` Andrew Rybchenko
2021-08-31 15:41 0% ` Andrew Rybchenko
2021-08-31 16:06 3% ` [dpdk-dev] [PATCH v4] " Andrew Rybchenko
2 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-20 12:18 UTC (permalink / raw)
To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Thomas Monjalon,
Ferruh Yigit
Cc: dev, Viacheslav Galaktionov
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Getting a list of representors from a representor does not make sense.
Instead, a parent device should be used.
To this end, extend the rte_eth_dev_data structure to include the port ID
of the parent device for representors.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
The new field is added into a hole in the rte_eth_dev_data structure.
The patch does not change the ABI, but extra care is required since the
ABI check is disabled for the structure because of the libabigail bug [1].
Potentially this is bad for out-of-tree drivers which implement
representors but do not fill in the new parent_port_id field in the
rte_eth_dev_data structure (a minimal sketch of the required change is
shown after the changelog below). Do we care?
Maybe the patch should add lines to the release notes, but I'd like
to get initial feedback first.
The mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure if we patch it correctly.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
v3:
- fix mlx5 build breakage
v2:
- fix mlx5 review notes
- try device port ID first before parent in order to address
backward compatibility issue
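For reference, the change needed in an out-of-tree representor driver
mirrors the one-line additions made to the in-tree PMDs below; a minimal
sketch, assuming a hypothetical my_repr_init() with access to the parent
PF's rte_eth_dev:

#include <ethdev_driver.h>

/* Hypothetical representor init function (sketch only). */
static int
my_repr_init(struct rte_eth_dev *eth_dev, struct rte_eth_dev *parent,
		uint16_t vf_id)
{
	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
	eth_dev->data->representor_id = vf_id;
	/* New with this patch: record the backing device's port ID so
	 * that representor lookup by name can query the parent.
	 */
	eth_dev->data->parent_port_id = parent->data->port_id;
	return 0;
}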
drivers/net/bnxt/bnxt_reps.c | 1 +
drivers/net/enic/enic_vf_representor.c | 1 +
drivers/net/i40e/i40e_vf_representor.c | 1 +
drivers/net/ice/ice_dcf_vf_representor.c | 1 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 17 +++++++++++++++++
drivers/net/mlx5/windows/mlx5_os.c | 17 +++++++++++++++++
lib/ethdev/ethdev_driver.h | 6 +++---
lib/ethdev/rte_class_eth.c | 22 ++++++++++++++++++++--
lib/ethdev/rte_ethdev.c | 8 ++++----
lib/ethdev/rte_ethdev_core.h | 4 ++++
11 files changed, 70 insertions(+), 9 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d..902591cd39 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = rep_params->vf_id;
+ eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 79dd6e5640..6ee7967ce9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
eth_dev->data->representor_id = vf->vf_id;
+ eth_dev->data->parent_port_id = pf->port_id;
eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
sizeof(struct rte_ether_addr) *
ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..865b637585 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
/* Setting the number queues allocated to the VF */
ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e9..c7cd3fd290 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
vf_rep_eth_dev->data->representor_id = repr->vf_id;
+ vf_rep_eth_dev->data->parent_port_id = repr->dcf_eth_dev->data->port_id;
vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..7a2063849e 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
ethdev->data->representor_id = representor->vf_id;
+ ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
/* Set representor device ops */
ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..66d851a97d 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->parent_port_id =
+ rte_eth_devices[port_id].data->port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS) {
+ DRV_LOG(ERR, "no master device for representor");
+ err = ENODEV;
+ goto error;
+ }
}
priv->mp_id.port_id = eth_dev->data->port_id;
strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c751..5c72c89b5a 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
if (priv->representor) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
eth_dev->data->representor_id = priv->representor_id;
+ MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+ struct mlx5_priv *opriv =
+ rte_eth_devices[port_id].data->dev_private;
+ if (opriv &&
+ opriv->master &&
+ opriv->domain_id == priv->domain_id &&
+ opriv->sh == priv->sh) {
+ eth_dev->data->parent_port_id =
+ rte_eth_devices[port_id].data->port_id;
+ break;
+ }
+ }
+ if (port_id >= RTE_MAX_ETHPORTS) {
+ DRV_LOG(ERR, "no master device for representor");
+ err = ENODEV;
+ goto error;
+ }
}
/*
* Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 40e474aa7e..b940e6cb38 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
* For backward compatibility, if no representor info, direct
* map legacy VF (no controller and pf).
*
- * @param ethdev
- * Handle of ethdev port.
+ * @param port_id
+ * Port ID of the backing device.
* @param type
* Representor type.
* @param controller
@@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
*/
__rte_internal
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..167d2d798c 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,14 +95,32 @@ eth_representor_cmp(const char *key __rte_unused,
c = i / (np * nf);
p = (i / nf) % np;
f = i % nf;
- if (rte_eth_representor_id_get(edev,
+ /*
+ * rte_eth_representor_id_get expects to receive port ID of
+ * the master device, but in order to maintain compatibility
+ * with mlx5's hardware bonding and legacy representor
+ * specification using just VF numbers, the representor's port
+ * ID is tried first.
+ */
+ ret = rte_eth_representor_id_get(edev->data->port_id,
eth_da.type,
eth_da.nb_mh_controllers == 0 ? -1 :
eth_da.mh_controllers[c],
eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
eth_da.nb_representor_ports == 0 ? -1 :
eth_da.representor_ports[f],
- &id) < 0)
+ &id);
+ if (ret == -ENOTSUP)
+ ret = rte_eth_representor_id_get(
+ edev->data->parent_port_id,
+ eth_da.type,
+ eth_da.nb_mh_controllers == 0 ? -1 :
+ eth_da.mh_controllers[c],
+ eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
+ eth_da.nb_representor_ports == 0 ? -1 :
+ eth_da.representor_ports[f],
+ &id);
+ if (ret < 0)
continue;
if (data->representor_id == id)
return 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..228ef7bf23 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
}
int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
enum rte_eth_representor_type type,
int controller, int pf, int representor_port,
uint16_t *repr_id)
@@ -6013,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
return -EINVAL;
/* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+ ret = rte_eth_representor_info_get(port_id, NULL);
if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
controller == -1 && pf == -1) {
/* Direct mapping for legacy VF representor. */
@@ -6028,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
if (info == NULL)
return -ENOMEM;
info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+ ret = rte_eth_representor_info_get(port_id, info);
if (ret < 0)
goto out;
@@ -6047,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
continue;
if (info->ranges[i].id_end < info->ranges[i].id_base) {
RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- ethdev->data->port_id, info->ranges[i].id_base,
+ port_id, info->ranges[i].id_base,
info->ranges[i].id_end, i);
continue;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..13cb84b52f 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,10 @@ struct rte_eth_dev_data {
/**< Switch-specific identifier.
* Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
*/
+ uint16_t parent_port_id;
+ /**< Port ID of the backing device.
+ * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+ */
pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
--
2.30.2
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v14 0/9] eal: Add EAL API for threading
@ 2021-08-19 21:31 3% ` Narcisa Ana Maria Vasile
0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-19 21:31 UTC (permalink / raw)
To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
talshn, ocardona
Cc: bruce.richardson, david.marchand, pallavi.kadam
From: Narcisa Vasile <navasile@microsoft.com>
EAL thread API
**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model, so it
currently relies on a header file that hides the Windows calls under
pthread-matched interfaces. Given that EAL should isolate the
environment specifics from the applications and libraries and mediate
all communication with the operating system, a new EAL interface
is needed for thread management.
**Goals**
* Introduce a generic EAL API for threading support that will remove
the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
3rd party thread library through a configuration option.
**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)
**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();
lib/librte_eal/common/rte_thread.c
int rte_thread_create()
{
return pthread_create();
}
lib/librte_eal/windows/rte_thread.c
int rte_thread_create()
{
return CreateThread();
}
-----------------------------------------------------
**Thread attributes**
When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:
typedef struct
{
enum rte_thread_priority priority;
rte_cpuset_t cpuset;
} rte_thread_attr_t;
The *rte_thread_create()* function can optionally receive an
rte_thread_attr_t object that will cause the thread to be created with
the affinity and priority described by the attributes object. If no
rte_thread_attr_t is passed (parameter is NULL), the default affinity
and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.
*Priority* is represented through an enum that currently advertises
two values for priority:
- RTE_THREAD_PRIORITY_NORMAL
- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority - sets the priority of a thread
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
with a new value for priority
*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in an
rte_thread_attr_t object
rte_thread_set/get_affinity – sets/gets the affinity of a thread
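Putting the pieces above together, a minimal usage sketch; the exact
prototypes may differ slightly between revisions of this series (the
start routine's return type in particular has changed across versions):

#include <rte_thread.h>

static uint32_t
worker(void *arg)
{
	(void)arg;
	/* ... thread body ... */
	return 0;
}

static int
launch_pinned_worker(void)
{
	rte_thread_t tid;
	rte_thread_attr_t attr;
	rte_cpuset_t cpuset;

	rte_thread_attr_init(&attr);
	rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);

	CPU_ZERO(&cpuset);
	CPU_SET(1, &cpuset);	/* pin the new thread to CPU 1 */
	rte_thread_attr_set_affinity(&attr, &cpuset);

	return rte_thread_create(&tid, &attr, worker, NULL);
}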
**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided.
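For illustration, such a translation function boils down to a switch
over Win32 error codes; a hypothetical sketch (the actual mapping table
in the patches is more complete):

#include <errno.h>
#include <windows.h>

static int
thread_translate_win32_error(DWORD error)
{
	switch (error) {
	case ERROR_SUCCESS:
		return 0;
	case ERROR_NOT_ENOUGH_MEMORY:
	case ERROR_NO_SYSTEM_RESOURCES:
		return ENOMEM;
	case ERROR_ACCESS_DENIED:
		return EACCES;
	case ERROR_INVALID_PARAMETER:
	default:
		return EINVAL;
	}
}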
**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Add support for pthread_mutex_trylock
* Additional functionality offered by pthread_*
(such as pthread_setname_np, etc.)
v14:
- Remove patch "eal: add EAL argument for setting thread priority"
This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
as the default.
- Fix issue with thread return value.
v13:
- Fix syntax error in unit tests
v12:
- Fix freebsd warning about initializer in unit tests
v11:
- Add unit tests for thread API
- Rebase
v10:
- Remove patch no. 10. It will be broken down in subpatches
and sent as a different patchset that depends on this one.
This is done due to the ABI breaks that would be caused by patch 10.
- Replace unix/rte_thread.c with common/rte_thread.c
- Remove initializations that may prevent compiler from issuing useful
warnings.
- Remove rte_thread_types.h and rte_windows_thread_types.h
- Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
- Remove functions that retrieve the thread handle from the process handle
- Remove rte_thread_cancel() until same behavior is obtained on
all platforms.
- Fix rte_thread_detach() function description,
return value and remove empty line.
- Reimplement mutex functions. Add compatible representation for mutex
identifier. Add macro to replace static mutex initialization instances.
- Fix commit messages (lines too long, remove unicode symbols)
v9:
- Sign patches
v8:
- Rebase
- Add rte_thread_detach() API
- Set the default priority when the user did not specify a value
v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.
v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()
v5:
- update cover letter with more details on the priority argument
v4:
- fix function description
- rebase
v3:
- rebase
v2:
- revert changes that break ABI
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c
Narcisa Vasile (9):
eal: add basic threading functions
eal: add thread attributes
eal/windows: translate Windows errors to errno-style errors
eal: implement functions for thread affinity management
eal: implement thread priority management functions
eal: add thread lifetime management
eal: implement functions for mutex management
eal: implement functions for thread barrier management
Add unit tests for thread API
app/test/meson.build | 2 +
app/test/test_threads.c | 419 +++++++++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/common/rte_thread.c | 441 ++++++++++++++++++++++++
lib/eal/include/rte_thread.h | 404 +++++++++++++++++++++-
lib/eal/unix/meson.build | 1 -
lib/eal/unix/rte_thread.c | 92 -----
lib/eal/version.map | 20 ++
lib/eal/windows/eal_lcore.c | 176 +++++++---
lib/eal/windows/eal_windows.h | 10 +
lib/eal/windows/include/sched.h | 2 +-
lib/eal/windows/rte_thread.c | 584 ++++++++++++++++++++++++++++++--
12 files changed, 1979 insertions(+), 173 deletions(-)
create mode 100644 app/test/test_threads.c
create mode 100644 lib/eal/common/rte_thread.c
delete mode 100644 lib/eal/unix/rte_thread.c
--
2.31.0.vfs.0.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check
@ 2021-08-19 21:10 4% ` Nicolas Chautru
2021-09-01 13:36 3% ` Tom Rix
0 siblings, 1 reply; 200+ results
From: Nicolas Chautru @ 2021-08-19 21:10 UTC (permalink / raw)
To: dev, gakhil
Cc: thomas, trix, hemant.agrawal, mingshan.zhang, arun.joshi,
Nicolas Chautru
Add a missing operation flag needed when CRC16
is used for the TB CRC check.
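For context (not part of the patch itself), an application is expected
to probe for the new capability before requesting CRC16 checking on an
operation; a minimal sketch using the standard bbdev capability-query
flow:

#include <rte_bbdev.h>
#include <rte_bbdev_op.h>

/* Enable CRC16 checking on an LDPC decode op only when the device
 * advertises RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK.
 */
static void
maybe_enable_crc16(uint16_t dev_id, struct rte_bbdev_dec_op *op)
{
	struct rte_bbdev_info info;
	const struct rte_bbdev_op_cap *cap;

	rte_bbdev_info_get(dev_id, &info);
	for (cap = info.drv.capabilities;
			cap->type != RTE_BBDEV_OP_NONE; cap++) {
		if (cap->type == RTE_BBDEV_OP_LDPC_DEC &&
				(cap->cap.ldpc_dec.capability_flags &
				 RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK)) {
			op->ldpc_dec.op_flags |=
				RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
			break;
		}
	}
}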
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
app/test-bbdev/test_bbdev_vector.c | 2 ++
doc/guides/prog_guide/bbdev.rst | 3 +++
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/bbdev/rte_bbdev_op.h | 34 ++++++++++++++++++----------------
4 files changed, 24 insertions(+), 16 deletions(-)
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 614dbd1..8d796b1 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -167,6 +167,8 @@
*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
+ else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
+ *op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
else if (!strcmp(token, "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
*op_flag_value = RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
else if (!strcmp(token, "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
index 9619280..8bd7cba 100644
--- a/doc/guides/prog_guide/bbdev.rst
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -891,6 +891,9 @@ given below.
|RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
| Set to drop the last CRC bits decoding output |
+--------------------------------------------------------------------+
+|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
+| Set for code block CRC-16 checking |
++--------------------------------------------------------------------+
|RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
| Set for bit-level de-interleaver bypass on input stream |
+--------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a55..69dd518 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,7 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* bbdev: Added capability related to more comprehensive CRC options.
ABI Changes
-----------
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index f946842..7c44ddd 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
/** Set to drop the last CRC bits decoding output */
RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
+ /** Set for transport block CRC-16 checking */
+ RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
/** Set for bit-level de-interleaver bypass on Rx stream. */
- RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
+ RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
/** Set for HARQ combined input stream enable. */
- RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
+ RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
/** Set for HARQ combined output stream enable. */
- RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
+ RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
/** Set for LDPC decoder bypass.
* RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
*/
- RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
+ RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
/** Set for soft-output stream enable */
- RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
+ RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
/** Set for Rate-Matching bypass on soft-out stream. */
- RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
+ RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
/** Set for bit-level de-interleaver bypass on soft-output stream. */
- RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 9),
+ RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 10),
/** Set for iteration stopping on successful decode condition
* i.e. a successful syndrome check.
*/
- RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
+ RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
/** Set if a device supports decoder dequeue interrupts. */
- RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
+ RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
/** Set if a device supports scatter-gather functionality. */
- RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
+ RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
/** Set if a device supports input/output HARQ compression. */
- RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
+ RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
/** Set if a device supports input LLR compression. */
- RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
+ RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
/** Set if a device supports HARQ input from
* device's internal memory.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 15),
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 16),
/** Set if a device supports HARQ output to
* device's internal memory.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 16),
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 17),
/** Set if a device supports loop-back access to
* HARQ internal memory. Intended for troubleshooting.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 17),
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 18),
/** Set if a device includes LLR filler bits in the circular buffer
* for HARQ memory. If not set, it is assumed the filler bits are not
* in HARQ memory and handled directly by the LDPC decoder.
*/
- RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 18)
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 19)
};
/** Flags for LDPC encoder operation and capability structure */
--
1.8.3.1
^ permalink raw reply [relevance 4%]
-- links below jump to the message on this page --
2020-12-18 15:16 [dpdk-dev] [RFC 1/9] devargs: fix data buffer storage type Xueming Li
2021-04-13 3:14 ` [dpdk-dev] [PATCH v5 0/5] eal: enable global device syntax by default Xueming Li
2021-04-14 19:49 ` Thomas Monjalon
2021-09-02 14:46 0% ` Thomas Monjalon
2021-01-10 18:44 [dpdk-dev] [PATCH] doc: add release milestones definition Asaf Penso
2021-08-26 10:11 5% ` [dpdk-dev] [PATCH v8] " Thomas Monjalon
2021-09-02 16:33 0% ` Ferruh Yigit
2021-02-24 21:20 [dpdk-dev] [RFC 0/5] Use correct memory ordering in eal functions Honnappa Nagarahalli
2021-09-09 23:13 ` [dpdk-dev] [PATCH v2 0/6] " Honnappa Nagarahalli
2021-09-09 23:13 ` [dpdk-dev] [PATCH v2 4/6] eal: update rte_eal_wait_lcore definition Honnappa Nagarahalli
2021-09-09 23:37 3% ` Honnappa Nagarahalli
2021-06-01 11:14 [dpdk-dev] [RFC PATCH] ethdev: clarify flow action PORT ID semantics Ivan Malov
2021-09-03 7:46 3% ` [dpdk-dev] [PATCH v1] " Andrew Rybchenko
2021-06-18 16:36 [dpdk-dev] [PATCH] devtools: script to track map symbols Ray Kinsella
2021-08-31 14:50 3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 1/3] devtools: script to track symbols over releases Ray Kinsella
2021-08-31 14:50 5% ` [dpdk-dev] [PATCH v10 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-01 12:46 0% ` Aaron Conole
2021-09-03 11:15 0% ` Kinsella, Ray
2021-09-03 13:32 0% ` Kinsella, Ray
2021-09-01 13:01 5% ` David Marchand
2021-09-03 13:28 0% ` Kinsella, Ray
2021-08-31 14:50 17% ` [dpdk-dev] [PATCH v10 3/3] maintainers: add new abi scripts Ray Kinsella
2021-09-01 12:31 0% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Aaron Conole
2021-09-03 13:23 3% ` [dpdk-dev] [PATCH v11 " Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
2021-09-03 13:23 5% ` [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-03 13:23 17% ` [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts Ray Kinsella
2021-09-08 15:12 3% ` [dpdk-dev] [PATCH v11 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 1/3] devtools: script to track symbols over releases Ray Kinsella
2021-09-08 15:12 5% ` [dpdk-dev] [PATCH v11 2/3] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-08 15:12 17% ` [dpdk-dev] [PATCH v11 3/3] maintainers: add new abi scripts Ray Kinsella
2021-09-08 15:13 3% ` [dpdk-dev] [PATCH v12 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 1/4] devtools: script to track symbols over releases Ray Kinsella
2021-09-08 15:13 5% ` [dpdk-dev] [PATCH v12 2/4] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-08 15:13 17% ` [dpdk-dev] [PATCH v12 3/4] maintainers: add new abi scripts Ray Kinsella
2021-09-09 13:48 3% ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 1/4] devtools: script to track symbols over releases Ray Kinsella
2021-09-09 13:48 5% ` [dpdk-dev] [PATCH v13 2/4] devtools: script to send notifications of expired symbols Ray Kinsella
2021-09-09 13:48 19% ` [dpdk-dev] [PATCH v13 3/4] maintainers: add new abi scripts Ray Kinsella
2021-06-24 10:28 [dpdk-dev] [PATCH 1/2] security: enforce semantics for Tx inline processing Akhil Goyal
2021-08-12 12:32 ` [dpdk-dev] [PATCH v4 0/4] security: Improve inline fast path routines Nithin Dabilpuram
2021-08-12 12:32 ` [dpdk-dev] [PATCH v4 4/4] doc: remove deprecation notice for security fast path change Nithin Dabilpuram
2021-09-06 18:57 3% ` Akhil Goyal
2021-06-29 13:46 [dpdk-dev] [PATCH] ethdev: add namespace Ferruh Yigit
2021-08-27 1:19 1% ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
2021-08-30 17:19 1% ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
2021-07-02 13:18 [dpdk-dev] [PATCH] dmadev: introduce DMA device library Chengwen Feng
2021-09-04 10:10 3% ` [dpdk-dev] [PATCH v20 0/7] support dmadev Chengwen Feng
2021-09-06 13:37 0% ` Bruce Richardson
2021-09-07 12:56 3% ` [dpdk-dev] [PATCH v21 " Chengwen Feng
2021-07-05 8:04 [dpdk-dev] [RFC PATCH v4 0/3] Add PIE support for HQoS library Liguzinski, WojciechX
2021-09-07 7:33 3% ` [dpdk-dev] [RFC PATCH v5 0/5] " Liguzinski, WojciechX
2021-09-07 7:33 ` [dpdk-dev] [RFC PATCH v5 1/5] sched: add PIE based congestion management Liguzinski, WojciechX
2021-09-07 19:14 3% ` Stephen Hemminger
2021-09-08 8:49 3% ` Liguzinski, WojciechX
2021-09-07 14:11 3% ` [dpdk-dev] [RFC PATCH v6 0/5] Add PIE support for HQoS library Liguzinski, WojciechX
2021-07-12 16:17 [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name Andrew Rybchenko
2021-08-18 14:00 ` [dpdk-dev] [PATCH v2] " Andrew Rybchenko
2021-08-27 9:18 0% ` Xueming(Steven) Li
2021-08-20 12:18 3% ` [dpdk-dev] [PATCH v3] " Andrew Rybchenko
2021-08-31 15:41 0% ` Andrew Rybchenko
2021-08-31 16:06 3% ` [dpdk-dev] [PATCH v4] " Andrew Rybchenko
2021-08-31 16:32 0% ` Wang, Haiyue
2021-08-31 16:37 0% ` Andrew Rybchenko
2021-09-01 5:15 0% ` Xing, Beilei
2021-09-01 14:55 0% ` Ferruh Yigit
2021-07-27 3:42 [dpdk-dev] [RFC] ethdev: introduce shared Rx queue Xueming Li
2021-08-11 14:04 ` [dpdk-dev] [PATCH v2 01/15] " Xueming Li
2021-08-17 9:33 ` Jerin Jacob
2021-08-17 11:31 ` Xueming(Steven) Li
2021-08-17 15:11 ` Jerin Jacob
2021-08-18 11:14 ` Xueming(Steven) Li
2021-08-19 5:26 ` Jerin Jacob
2021-08-19 12:09 ` Xueming(Steven) Li
2021-08-26 11:58 4% ` Jerin Jacob
2021-08-28 14:16 0% ` Xueming(Steven) Li
2021-08-30 9:31 3% ` Jerin Jacob
2021-08-30 10:13 0% ` Xueming(Steven) Li
2021-07-30 21:44 [dpdk-dev] [PATCH] doc: abstract the behaviour of rte_ctrl_thread_create Honnappa Nagarahalli
2021-08-07 14:55 ` Thomas Monjalon
2021-08-09 13:18 ` Honnappa Nagarahalli
2021-08-23 9:40 3% ` Olivier Matz
2021-08-23 21:18 0% ` Honnappa Nagarahalli
2021-08-03 19:01 [dpdk-dev] [PATCH v13 00/10] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-08-19 21:31 3% ` [dpdk-dev] [PATCH v14 0/9] " Narcisa Ana Maria Vasile
2021-08-08 19:26 [dpdk-dev] [PATCH] version: 21.11-rc0 Thomas Monjalon
2021-08-17 6:34 ` David Marchand
2021-08-17 12:04 ` [dpdk-dev] [dpdk-ci] " Lincoln Lavoie
2021-08-24 7:58 3% ` David Marchand
2021-08-24 12:19 3% ` Lincoln Lavoie
2021-08-10 19:50 [dpdk-dev] [PATCH v2 0/4] cryptodev: expose driver interface as internal Akhil Goyal
2021-09-07 19:00 ` [dpdk-dev] [PATCH v3 " Akhil Goyal
2021-09-07 19:00 3% ` [dpdk-dev] [PATCH v3 2/4] cryptodev: change valid dev API Akhil Goyal
2021-09-07 19:00 2% ` [dpdk-dev] [PATCH v3 4/4] cryptodev: expose driver interface as internal Akhil Goyal
2021-09-07 19:22 ` [dpdk-dev] [PATCH v4 0/4] " Akhil Goyal
2021-09-07 19:22 3% ` [dpdk-dev] [PATCH v4 2/4] cryptodev: change valid dev API Akhil Goyal
2021-09-08 15:16 0% ` Akhil Goyal
2021-09-07 19:22 2% ` [dpdk-dev] [PATCH v4 4/4] cryptodev: expose driver interface as internal Akhil Goyal
2021-08-16 5:59 [dpdk-dev] [PATCH 0/3] Add user specified IV with lookaside IPsec Anoob Joseph
2021-09-06 14:58 ` [dpdk-dev] [PATCH v2 " Anoob Joseph
2021-09-06 14:58 4% ` [dpdk-dev] [PATCH v2 1/3] security: support user specified IV Anoob Joseph
2021-09-07 16:17 3% ` [dpdk-dev] [PATCH v3 0/3] Add user specified IV with lookaside IPsec Anoob Joseph
2021-09-07 16:17 5% ` [dpdk-dev] [PATCH v3 1/3] security: support user specified IV Anoob Joseph
2021-08-17 13:42 [dpdk-dev] [PATCH 0/5] Add SA lifetime in security Anoob Joseph
2021-09-07 16:32 ` [dpdk-dev] [PATCH v2 0/6] " Anoob Joseph
2021-09-07 16:32 8% ` [dpdk-dev] [PATCH v2 1/6] security: add SA lifetime configuration Anoob Joseph
2021-08-17 17:38 [dpdk-dev] [PATCH v5] app/testpmd: fix testpmd doesn't show RSS hash offload Jie Wang
2021-08-24 18:19 ` [dpdk-dev] [PATCH v6 0/2] testpmd shows incorrect rx_offload configuration Jie Wang
2021-08-24 18:19 ` [dpdk-dev] [PATCH v6 1/2] ethdev: add an API to get device configuration info Jie Wang
2021-08-25 20:07 3% ` Ferruh Yigit
2021-08-26 6:00 0% ` Ajit Khaparde
2021-08-18 23:26 [dpdk-dev] [PATCH v7] eal: remove sys/queue.h from public headers William Tu
2021-08-23 13:03 1% ` [dpdk-dev] [PATCH v8] " William Tu
2021-08-24 16:21 1% ` [dpdk-dev] [PATCH v9] " William Tu
2021-08-19 21:10 [dpdk-dev] [PATCH v2 0/6] bbdev update related to CRC usage Nicolas Chautru
2021-08-19 21:10 4% ` [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
2021-09-01 13:36 3% ` Tom Rix
2021-09-01 15:00 3% ` Chautru, Nicolas
2021-09-11 19:11 0% ` Tom Rix
2021-08-20 16:28 3% [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
2021-08-20 16:28 2% ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
2021-08-26 12:37 3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
2021-09-06 18:09 0% ` Ferruh Yigit
2021-08-21 14:07 0% [dpdk-dev] [PATCH 21.11 v2 0/3] octeontx build only on 64-bit Linux Pavan Nikhilesh Bhagavatula
2021-08-23 10:02 [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload Akhil Goyal
2021-09-07 8:47 4% ` Ferruh Yigit
2021-09-08 10:29 0% ` [dpdk-dev] [EXT] " Anoob Joseph
2021-09-13 6:56 0% ` Xu, Rosen
2021-09-13 7:22 0% ` Andrew Rybchenko
2021-08-23 19:40 [dpdk-dev] [RFC 01/15] eventdev: make driver interface as internal pbhagavatula
2021-08-23 19:40 2% ` [dpdk-dev] [RFC 04/15] eventdev: move inline APIs into separate structure pbhagavatula
2021-09-08 12:03 0% ` Kinsella, Ray
2021-08-23 19:40 ` [dpdk-dev] [RFC 11/15] eventdev: reserve fields in timer object pbhagavatula
2021-08-24 15:10 3% ` Stephen Hemminger
2021-09-01 6:48 0% ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2021-09-07 21:02 0% ` Carrillo, Erik G
2021-08-25 11:27 [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping Xuan Ding
2021-08-25 11:47 ` Burakov, Anatoly
2021-08-26 9:29 ` Ferruh Yigit
2021-08-26 9:46 3% ` Burakov, Anatoly
2021-08-26 10:09 3% ` Bruce Richardson
2021-08-26 10:14 0% ` Burakov, Anatoly
2021-08-31 13:42 0% ` Ding, Xuan
2021-08-26 10:35 15% [dpdk-dev] [PATCH] doc: announce library refactor for ABI improvement Ferruh Yigit
2021-08-26 10:46 4% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-08-26 10:47 4% ` Jerin Jacob
2021-08-26 11:04 4% ` Bruce Richardson
2021-08-26 15:44 4% ` Andrew Rybchenko
2021-08-31 15:48 4% ` Kinsella, Ray
2021-08-26 14:57 4% [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-08-26 14:57 1% ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-09-03 12:40 4% ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
2021-09-03 12:40 1% ` [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-08-27 1:47 4% [dpdk-dev] [Bug 797] [dpdk-21.11]Segmentation fault when start txonly packet forward after set txpkts=40, 64 and txsplit=rand bugzilla
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-08-27 6:56 2% ` [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-08-29 8:48 4% [dpdk-dev] Marvell v21.11 Roadmap Jerin Jacob Kollanukkaran
2021-08-29 12:51 [dpdk-dev] [PATCH 0/8] cryptodev: hide internal strutures Akhil Goyal
2021-08-29 12:51 3% ` [dpdk-dev] [PATCH 2/8] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-08-30 10:25 [dpdk-dev] [RFC 01/15] eventdev: make driver interface as internal Mattias Rönnblom
2021-08-30 16:00 2% ` [dpdk-dev] [RFC] eventdev: uninline inline API functions Mattias Rönnblom
2021-08-31 12:28 0% ` Jerin Jacob
2021-08-31 12:34 0% ` Mattias Rönnblom
2021-08-30 19:44 [dpdk-dev] [PATCH] eventdev: update crypto adapter metadata structures Shijith Thotton
2021-08-30 19:59 ` [dpdk-dev] [PATCH v2] " Shijith Thotton
2021-08-31 6:08 3% ` Akhil Goyal
2021-08-31 6:51 0% ` Shijith Thotton
2021-08-31 7:56 6% ` [dpdk-dev] [PATCH v3] " Shijith Thotton
2021-09-07 8:34 0% ` Jerin Jacob
2021-09-07 8:53 0% ` Gujjar, Abhinandan S
2021-09-07 10:37 0% ` Shijith Thotton
2021-09-07 17:30 0% ` Gujjar, Abhinandan S
2021-09-08 7:42 0% ` Shijith Thotton
2021-09-08 7:53 0% ` Gujjar, Abhinandan S
2021-08-31 13:10 [dpdk-dev] [PATCH] doc: announce change in vfio dma mapping Xuan Ding
2021-08-31 16:01 3% ` Ferruh Yigit
2021-09-01 1:41 0% ` Ding, Xuan
2021-09-01 9:56 3% ` Ferruh Yigit
2021-09-01 11:01 0% ` Burakov, Anatoly
2021-09-01 11:42 4% ` Ferruh Yigit
2021-09-01 13:25 4% ` Burakov, Anatoly
2021-09-02 9:50 3% ` Ferruh Yigit
2021-09-02 16:13 0% ` Kinsella, Ray
2021-09-06 8:51 0% ` Ding, Xuan
2021-09-06 13:43 0% ` Ferruh Yigit
2021-09-07 15:21 0% ` Burakov, Anatoly
2021-09-07 16:08 0% ` Ferruh Yigit
2021-08-31 16:25 [dpdk-dev] [PATCH v1] bbdev: remove experimental tag from API Nicolas Chautru
2021-08-31 16:25 3% ` Nicolas Chautru
2021-08-31 16:38 ` David Marchand
2021-09-01 3:52 ` Hemant Agrawal
2021-09-01 11:14 ` Kinsella, Ray
2021-09-01 14:54 3% ` Chautru, Nicolas
2021-09-01 5:07 3% [dpdk-dev] [PATCH v1 0/3] Promote some API to stable Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 1/3] net/ixgbe: promote " Haiyue Wang
2021-09-01 9:02 0% ` Ferruh Yigit
2021-09-01 11:13 3% ` Wang, Haiyue
2021-09-01 13:55 0% ` Kinsella, Ray
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 2/3] net/ice: " Haiyue Wang
2021-09-01 5:07 3% ` [dpdk-dev] [PATCH v1 3/3] ethdev: promote burst mode " Haiyue Wang
2021-09-01 9:07 0% ` Ferruh Yigit
2021-09-01 13:49 0% ` Kinsella, Ray
2021-09-06 5:56 3% ` [dpdk-dev] [PATCH v2] " Haiyue Wang
2021-09-06 6:36 0% ` Andrew Rybchenko
2021-09-02 17:59 [dpdk-dev] [PATCH v2 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-09-02 17:59 ` [dpdk-dev] [PATCH v2 1/5] net/enetfec: introduce " Apeksha Gupta
2021-09-03 7:14 3% ` David Marchand
2021-09-10 3:47 0% ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-09-03 0:47 [dpdk-dev] [PATCH 0/5] Packet capture framework enhancements Stephen Hemminger
2021-09-03 0:47 1% ` [dpdk-dev] [PATCH 2/5] pdump: support pcapng and filtering Stephen Hemminger
2021-09-03 0:47 2% ` [dpdk-dev] [PATCH 4/5] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-03 22:06 ` [dpdk-dev] [PATCH v2 0/5] Packet capture framework enhancements Stephen Hemminger
2021-09-03 22:06 1% ` [dpdk-dev] [PATCH v2 2/5] pdump: support pcapng and filtering Stephen Hemminger
2021-09-03 22:06 1% ` [dpdk-dev] [PATCH v2 4/5] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-08 4:50 ` [dpdk-dev] [PATCH v3 0/8] Packet capture framework enhancements Stephen Hemminger
2021-09-08 4:50 1% ` [dpdk-dev] [PATCH v3 5/8] pdump: support pcapng and filtering Stephen Hemminger
2021-09-08 4:50 1% ` [dpdk-dev] [PATCH v3 7/8] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-08 17:16 ` [dpdk-dev] [PATCH v4 0/8] Packet capture framework enhancements Stephen Hemminger
2021-09-08 17:16 1% ` [dpdk-dev] [PATCH v4 5/8] pdump: support pcapng and filtering Stephen Hemminger
2021-09-08 17:16 1% ` [dpdk-dev] [PATCH v4 7/8] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-08 21:50 ` [dpdk-dev] [PATCH v5 0/9] Packet capture framework enhancements Stephen Hemminger
2021-09-08 21:50 1% ` [dpdk-dev] [PATCH v5 6/9] pdump: support pcapng and filtering Stephen Hemminger
2021-09-08 21:50 1% ` [dpdk-dev] [PATCH v5 8/9] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-09 23:33 ` [dpdk-dev] [PATCH v6 00/10] Packet capture framework enhancements Stephen Hemminger
2021-09-09 23:33 1% ` [dpdk-dev] [PATCH v6 07/10] pdump: support pcapng and filtering Stephen Hemminger
2021-09-09 23:33 1% ` [dpdk-dev] [PATCH v6 09/10] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-10 18:18 ` [dpdk-dev] [PATCH v7 00/11] Packet capture framework enhancements Stephen Hemminger
2021-09-10 18:18 1% ` [dpdk-dev] [PATCH v7 06/11] pdump: support pcapng and filtering Stephen Hemminger
2021-09-10 18:18 1% ` [dpdk-dev] [PATCH v7 10/11] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-03 16:31 3% [dpdk-dev] DPDK Release Status Meeting 2021-09-02 Mcnamara, John
2021-09-06 16:01 [dpdk-dev] [PATCH] fib: promote experimental API's to stable Vladimir Medvedkin
2021-09-07 0:34 ` Stephen Hemminger
2021-09-08 13:57 3% ` Medvedkin, Vladimir
2021-09-08 15:10 0% ` Stephen Hemminger
2021-09-06 16:55 [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver Selwin Sebastian
2021-09-06 17:17 3% ` David Marchand
2021-09-07 13:37 [dpdk-dev] [PATCH] crypto/dpaa_sec: update release notes Hemant Agrawal
2021-09-07 19:15 3% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-08 7:02 0% ` Hemant Agrawal
2021-09-08 1:15 [dpdk-dev] [PATCH v3 0/6] bbdev update related to CRC usage Nicolas Chautru
2021-09-08 1:15 4% ` [dpdk-dev] [PATCH v3 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
2021-09-12 12:35 0% ` Tom Rix
2021-09-08 8:21 [dpdk-dev] [PATCH 0/3] add option to configure tunnel header verification Tejasree Kondoj
2021-09-08 8:21 5% ` [dpdk-dev] [PATCH 1/3] security: " Tejasree Kondoj
2021-09-08 7:46 0% ` Hemant Agrawal
2021-09-08 10:42 3% ` Akhil Goyal
2021-09-08 8:25 [dpdk-dev] [PATCH 0/3] add option to configure UDP ports verification Tejasree Kondoj
2021-09-08 8:25 5% ` [dpdk-dev] [PATCH 1/3] security: " Tejasree Kondoj
2021-09-08 7:42 0% ` Hemant Agrawal
2021-09-08 10:59 3% [dpdk-dev] [PATCH] net: promote make rarp packet API as stable Xiao Wang
2021-09-08 4:52 0% ` Stephen Hemminger
2021-09-08 5:07 0% ` Xia, Chenbo
2021-09-09 12:54 3% [dpdk-dev] [PATCH 0/2] eventdev: add port usage hints Harry van Haaren
2021-09-09 13:45 [dpdk-dev] build: Increase the default value of RTE_MAX_LCORE David Hunt
2021-09-09 13:45 ` [dpdk-dev] [PATCH v1 1/6] build: increase default of max lcores to 512 David Hunt
2021-09-09 14:37 ` Bruce Richardson
2021-09-10 6:51 ` David Marchand
2021-09-10 7:54 ` Bruce Richardson
2021-09-10 8:06 3% ` David Marchand
2021-09-10 8:24 0% ` Thomas Monjalon
2021-09-09 17:56 [dpdk-dev] [PATCH 00/18] comment spelling errors Stephen Hemminger
2021-09-09 17:56 4% ` [dpdk-dev] [PATCH 08/18] eal: fix typos in comments Stephen Hemminger
2021-09-09 18:10 ` [dpdk-dev] [PATCH v2 00/18] Fix spelling errors Stephen Hemminger
2021-09-09 18:10 4% ` [dpdk-dev] [PATCH v2 08/18] eal: fix typos in comments Stephen Hemminger
2021-09-10 2:23 [dpdk-dev] [PATCH 0/8] Removal of PCI bus ABIs Chenbo Xia
2021-09-10 2:24 3% ` [dpdk-dev] [PATCH 7/8] kni: replace unused variable definition with reserved bytes Chenbo Xia
2021-09-10 2:24 2% ` [dpdk-dev] [PATCH 8/8] bus/pci: remove ABIs in PCI bus Chenbo Xia
2021-09-10 12:30 3% [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
2021-09-10 12:30 2% ` [dpdk-dev] [PATCH v1 2/7] fbarray: promote " Anatoly Burakov
2021-09-10 15:52 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 3/7] eal: promote malloc " Anatoly Burakov
2021-09-10 15:53 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 4/7] mem: promote memseg " Anatoly Burakov
2021-09-10 15:55 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 5/7] mem: promote extmem " Anatoly Burakov
2021-09-10 15:56 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask " Anatoly Burakov
2021-09-10 15:56 0% ` Kinsella, Ray
2021-09-10 12:30 3% ` [dpdk-dev] [PATCH v1 7/7] eal: promote mcfg " Anatoly Burakov
2021-09-10 16:23 0% ` Kinsella, Ray
2021-09-10 15:51 0% ` [dpdk-dev] [PATCH v1 1/7] eal: promote IPC " Kinsella, Ray
2021-09-10 14:16 9% [dpdk-dev] [RFC 1/3] ethdev: update modify field flow action Viacheslav Ovsiienko
[not found] <DU2PR04MB8630C0339D3CFB3CDE952B1389D69@DU2PR04MB8630.eurprd04.prod.outlook.com>
2021-09-10 14:25 4% ` [dpdk-dev] Minutes of Technical Board Meeting, 2021-08-25 Hemant Agrawal
2021-09-10 23:16 [dpdk-dev] [PATCH] cmdline: make cmdline structure opaque Dmitry Kozlyuk
2021-09-10 23:16 8% ` [dpdk-dev] [PATCH] cmdline: reduce ABI Dmitry Kozlyuk